LLM Reasoning relevance: 8/10

ConceptKT: A Benchmark for Concept-Level Deficiency Prediction in Knowledge Tracing

Yu-Chen Kang, Yu-Chien Tang, An-Zi Yen
arXiv: 2603.24073v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

This paper proposes the task of concept-level knowledge tracing and constructs the ConceptKT dataset for predicting students' knowledge deficiencies.

Key Contributions

  • Proposes the task of concept-level knowledge tracing
  • Constructs the ConceptKT dataset
  • Explores LLM-based in-context learning approaches to knowledge tracing

Methodology

Using historical response records that include concept information, the paper evaluates the concept-level diagnostic capabilities of LLMs and LRMs through in-context learning.
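The history-selection idea can be sketched in code. This is a minimal, hypothetical illustration (not the paper's actual implementation): each past record is scored by concept overlap (Jaccard) plus a bag-of-words cosine similarity standing in for embedding-based semantic similarity, and the top-k records are kept as in-context examples. All names (`select_history`, `alpha`, the sample records) are assumptions for illustration.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over token-count vectors (a cheap stand-in
    # for semantic similarity from sentence embeddings).
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(a: set, b: set) -> float:
    # Conceptual alignment: overlap of required-concept sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def select_history(target_text, target_concepts, history, k=2, alpha=0.5):
    """Rank past records by concept alignment + textual similarity, keep top-k."""
    tv = Counter(target_text.lower().split())
    def score(rec):
        return (alpha * jaccard(set(target_concepts), set(rec["concepts"]))
                + (1 - alpha) * cosine(tv, Counter(rec["text"].lower().split())))
    return sorted(history, key=score, reverse=True)[:k]

# Toy response history: (question text, required concepts, correctness).
history = [
    {"text": "solve the quadratic equation x^2 - 4 = 0",
     "concepts": {"quadratic", "factoring"}, "correct": False},
    {"text": "compute the area of a circle",
     "concepts": {"geometry"}, "correct": True},
    {"text": "factor x^2 + 5x + 6",
     "concepts": {"factoring"}, "correct": False},
]
picked = select_history("solve x^2 - 9 = 0 by factoring",
                        {"quadratic", "factoring"}, history, k=2)
print([r["text"] for r in picked])
```

The selected records, together with their concept labels and correctness, would then be placed in the LLM's prompt as in-context examples before asking it to predict missing concepts for the target question.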

Original Abstract

Knowledge Tracing (KT) is a critical technique for modeling student knowledge to support personalized learning. However, most KT systems focus on binary correctness prediction and cannot diagnose the underlying conceptual misunderstandings that lead to errors. Such fine-grained diagnostic feedback is essential for designing targeted instruction and effective remediation. In this work, we introduce the task of concept-level deficiency prediction, which extends traditional KT by identifying the specific concepts a student is likely to struggle with on future problems. We present ConceptKT, a dataset annotated with labels that capture both the concepts required to solve each question and the missing concepts underlying incorrect responses. We investigate in-context learning approaches to KT and evaluate the diagnostic capabilities of various Large Language Models (LLMs) and Large Reasoning Models (LRMs). Different strategies for selecting informative historical records are explored. Experimental results demonstrate that selecting response histories based on conceptual alignment and semantic similarity leads to improved performance on both correctness prediction and concept-level deficiency identification.

Tags

Knowledge Tracing, Concept Learning, LLM, In-Context Learning, Education

arXiv Categories

cs.CL