LLM Reasoning relevance: 8/10

Beyond Holistic Scores: Automatic Trait-Based Quality Scoring of Argumentative Essays

Lucile Favero, Juan Antonio Pérez-Ortiz, Tanja Käser, Nuria Oliver
arXiv: 2602.04604v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

The paper studies trait-based automatic scoring of argumentative essays, improving the interpretability and pedagogical usefulness of the scores.

Key Contributions

  • Proposes a structured in-context learning approach using small open-source LLMs
  • Proposes a CORAL-style ordinal regression method built on the BigBird model
  • Validates the effectiveness of ordinal modeling for argumentative essay scoring
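The structured in-context learning setup can be illustrated with a minimal prompt-assembly sketch. The trait name, rubric wording, and output format below are illustrative assumptions, not taken from the paper; only the overall shape (rubric-aligned few-shot examples plus requests for feedback and a confidence estimate) follows the described setup.

```python
def build_trait_prompt(trait, rubric, examples, essay, lo=1, hi=6):
    """Assemble a few-shot, rubric-aligned prompt that asks a small LLM
    for a trait score, short feedback, and a confidence estimate."""
    parts = [f"You are grading the trait '{trait}' on a {lo}-{hi} scale.",
             f"Rubric: {rubric}", ""]
    for ex_essay, ex_score in examples:  # rubric-aligned in-context examples
        parts += [f"Essay: {ex_essay}", f"Score: {ex_score}", ""]
    parts += [f"Essay: {essay}",
              "Respond with: Score, one-sentence Feedback, Confidence (0-1)."]
    return "\n".join(parts)

# Hypothetical usage with a single in-context example.
prompt = build_trait_prompt(
    trait="Organization",
    rubric="Higher scores reflect a clear claim-evidence structure.",
    examples=[("Essay arguing for school uniforms ...", 4)],
    essay="Schools should start later because ...",
)
```

Keeping the prompt a pure string template makes the setup transparent and locally deployable, in line with the privacy-preserving scenario the paper targets.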

Methodology

Combines in-context learning with small LLMs and supervised learning based on BigBird, using the CORAL framework to explicitly model the ordinal nature of the scores.
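The CORAL idea behind the BigBird head can be sketched in a few lines: a K-point rubric becomes K-1 binary "score exceeds rank r_k" tasks that share one logit and differ only in their bias terms, which guarantees rank-consistent probabilities. The NumPy toy below is a minimal illustration of that prediction rule, not the paper's implementation; the feature vector stands in for a BigBird encoding, and the 6-point scale is an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coral_predict(x, w, biases):
    """CORAL head: one shared logit w.x plus K-1 ordered biases.
    P(y > r_k) = sigmoid(w.x + b_k); predicted score = 1 + #{k: P > 0.5}."""
    shared = w @ x                     # single logit shared across thresholds
    probs = sigmoid(shared + biases)   # shape (K-1,), one per rank threshold
    return 1 + int(np.sum(probs > 0.5)), probs

# Toy 6-point rubric -> 5 binary thresholds, with random stand-in features.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
biases = np.sort(rng.normal(size=5))[::-1]  # non-increasing biases
x = rng.normal(size=8)                      # stand-in for an essay encoding
score, probs = coral_predict(x, w, biases)
assert np.all(np.diff(probs) <= 0)  # rank-consistent: P(y > r_k) is monotone
```

Because all thresholds share the same weight vector, the exceedance probabilities can never cross, which is the rank-consistency property that plain per-class classification lacks.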

Original Abstract

Automated Essay Scoring systems have traditionally focused on holistic scores, limiting their pedagogical usefulness, especially in the case of complex essay genres such as argumentative writing. In educational contexts, teachers and learners require interpretable, trait-level feedback that aligns with instructional goals and established rubrics. In this paper, we study trait-based Automatic Argumentative Essay Scoring using two complementary modeling paradigms designed for realistic educational deployment: (1) structured in-context learning with small open-source LLMs, and (2) a supervised, encoder-based BigBird model with a CORAL-style ordinal regression formulation, optimized for long-sequence understanding. We conduct a systematic evaluation on the ASAP++ dataset, which includes essay scores across five quality traits, offering strong coverage of core argumentation dimensions. LLMs are prompted with designed, rubric-aligned in-context examples, along with feedback and confidence requests, while we explicitly model ordinality in scores with the BigBird model via the rank-consistent CORAL framework. Our results show that explicitly modeling score ordinality substantially improves agreement with human raters across all traits, outperforming LLMs and nominal classification and regression-based baselines. This finding reinforces the importance of aligning model objectives with rubric semantics for educational assessment. At the same time, small open-source LLMs achieve a competitive performance without task-specific fine-tuning, particularly for reasoning-oriented traits, while enabling transparent, privacy-preserving, and locally deployable assessment scenarios. Our findings provide methodological, modeling, and practical insights for the design of AI-based educational systems that aim to deliver interpretable, rubric-aligned feedback for argumentative writing.

Tags

Automated Scoring, Argumentative Essays, Natural Language Processing, Education

arXiv Category

cs.CL