LLM Reasoning relevance: 9/10

DiSCTT: Consensus-Guided Self-Curriculum for Efficient Test-Time Adaptation in Reasoning

Mohammad Mahdi Moradi, Sudhir Mudur
arXiv: 2603.05357v1 Published: 2026-03-05 Updated: 2026-03-05

AI Summary

DiSCTT improves the test-time adaptation performance of large language models on reasoning tasks through consensus-guided self-paced curriculum learning.

Main Contributions

  • Proposes DiSCTT, a difficulty-aware, consensus-guided self-curriculum framework
  • Estimates instance difficulty from the agreement among sampled reasoning trajectories
  • Optimizes high- and low-consensus inputs via supervised fine-tuning and reinforcement learning, respectively

Methodology

Samples are split into easy and hard according to the degree of consensus among sampled reasoning trajectories; high-consensus (easy) inputs are adapted at test time with supervised fine-tuning, while low-consensus (hard) inputs are adapted with reinforcement learning.
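The routing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `consensus_route`, the majority-vote consensus measure, and the `threshold` value are all assumptions for demonstration, since the summary does not specify how DiSCTT computes or thresholds consensus.

```python
from collections import Counter

def consensus_route(final_answers, threshold=0.6):
    """Estimate instance-level consensus from the final answers of sampled
    reasoning trajectories and route the input to a test-time strategy.

    Hypothetical sketch: consensus here is the fraction of trajectories
    agreeing with the majority answer; DiSCTT's exact measure may differ.
    """
    majority_answer, count = Counter(final_answers).most_common(1)[0]
    consensus = count / len(final_answers)
    if consensus >= threshold:
        # High consensus: consolidate via supervised fine-tuning,
        # using the majority-agreed answer as a pseudo-label.
        return "sft", majority_answer
    # Low consensus: optimize via reinforcement learning with a
    # consensus-regularized objective (not shown here).
    return "rl", None
```

For example, four sampled trajectories answering `["42", "42", "42", "41"]` give consensus 0.75 and route to supervised fine-tuning with pseudo-label `"42"`, whereas four mutually disagreeing answers route to reinforcement learning.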

Original Abstract

Test-time adaptation offers a promising avenue for improving reasoning performance in large language models without additional supervision, but existing approaches often apply a uniform optimization objective across all inputs, leading to inefficient or unstable adaptation on heterogeneous reasoning problems. We propose DiSCTT, a difficulty-aware, consensus-guided self-curriculum framework that dynamically allocates test-time optimization strategies based on instance-level epistemic uncertainty estimated from agreement among sampled reasoning trajectories. Inputs with high consensus are consolidated via supervised fine-tuning using majority-agreed solutions as pseudo-labels, while low-consensus inputs are optimized via reinforcement learning with a consensus-regularized objective that encourages diversity under relevance constraints. Across a broad suite of mathematical and general reasoning benchmarks, DiSCTT consistently outperforms strong test-time adaptation baselines, achieving higher accuracy with reduced variance and substantially lower computation and wall-clock training times. These results demonstrate that explicitly accounting for instance difficulty and uncertainty enables more stable, efficient, and effective test-time adaptation for reasoning models.

Tags

Test-time adaptation · Self-paced learning · Reasoning · Reinforcement learning

arXiv Categories

cs.CL