Paper presentation: “Improving AI Text Classification: A Cascaded Approach”
Date:
LLMs have rapidly evolved into versatile "foundation models" that, despite persistent gaps in reliability, are repurposed for a variety of tasks, such as legal document summarization, medical question answering, and text classification. In this paper, we propose an approach to engineering better text classification solutions for educational grading. We address this challenge with a solution that couples (i) a transformer cascade for rubric-level prediction with (ii) a transparent, traffic-light feedback interface powered by a Mixture-of-Agents LLM system. We compared our approach against a standard LLM and a single-transformer architecture on the ASAG dataset. Results show that, compared to a single transformer, our approach increases recall on incorrect answers by more than 50% and precision on fully correct answers by 20%. Finally, we describe a prototype that implements our approach as an end-to-end, minimally intrusive solution for semi-automatic grading, allowing teaching staff to review and revise the feedback that the Mixture-of-Agents LLM system generates from the grade classification.
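The cascade idea can be illustrated with a minimal sketch: a first-stage classifier screens for incorrect answers, and only answers that pass are forwarded to a second stage that separates partially from fully correct ones. The stage functions, label names, and toy keyword rules below are illustrative assumptions, not the paper's actual fine-tuned transformer models.

```python
from typing import Callable

Label = str  # "incorrect", "partially_correct", or "fully_correct" (assumed labels)


def cascade_predict(answer: str,
                    stage1: Callable[[str], str],
                    stage2: Callable[[str], str]) -> Label:
    """Two-stage cascade: stage 1 screens out incorrect answers;
    stage 2 distinguishes partially from fully correct ones."""
    if stage1(answer) == "incorrect":
        return "incorrect"
    return stage2(answer)


# Toy stand-ins for the two fine-tuned transformer stages (hypothetical).
def toy_stage1(answer: str) -> str:
    return "incorrect" if "photosynthesis" not in answer else "pass"


def toy_stage2(answer: str) -> str:
    return "fully_correct" if "chlorophyll" in answer else "partially_correct"


for a in ["plants eat soil",
          "photosynthesis uses light",
          "photosynthesis uses light absorbed by chlorophyll"]:
    print(cascade_predict(a, toy_stage1, toy_stage2))
# → incorrect
# → partially_correct
# → fully_correct
```

Splitting the decision this way lets each stage specialize, which is one plausible reason a cascade can improve recall on incorrect answers relative to a single multi-class model.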