Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought
AI Summary
Reveals the phenomenon of performative CoT in reasoning models and proposes methods for detecting and mitigating it.
Key Contributions
- Identified the phenomenon of performative CoT in reasoning models
- Proposed a method for detecting performative CoT using activation probes
- Implemented a probe-guided early-exit mechanism that reduces computation
Methodology
Analyzes the model's internal state using activation probes, early forced answering, and a CoT monitor, comparing behavior across tasks of different difficulty.
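The activation-probing idea can be sketched as follows: train a linear probe that maps hidden-state activations taken mid-CoT to the model's eventual final answer, then measure how accurately the answer is decodable. This is a minimal illustration with synthetic activations standing in for real hidden states, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_classes, n_samples = 64, 4, 500  # toy sizes; real hidden states are much larger

# Synthetic "activations": each sample encodes its answer label along a
# class-specific direction, plus noise (stand-in for real mid-CoT activations).
answers = rng.integers(0, n_classes, size=n_samples)   # e.g. MMLU options A-D
directions = rng.normal(size=(n_classes, d_model))
acts = directions[answers] + 0.5 * rng.normal(size=(n_samples, d_model))

# Linear probe: least-squares regression from activations to one-hot answers.
onehot = np.eye(n_classes)[answers[:400]]
W, *_ = np.linalg.lstsq(acts[:400], onehot, rcond=None)

# Decode held-out answers by taking the argmax of the probe's output.
preds = (acts[400:] @ W).argmax(axis=1)
acc = (preds == answers[400:]).mean()
print(f"probe accuracy: {acc:.2f}")
```

If the answer is decodable with high accuracy long before the CoT ends, the remaining tokens are candidates for "performative" reasoning.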
Original Abstract
We provide evidence of performative chain-of-thought (CoT) in reasoning models, where a model becomes strongly confident in its final answer, but continues generating tokens without revealing its internal belief. Our analysis compares activation probing, early forced answering, and a CoT monitor across two large models (DeepSeek-R1 671B & GPT-OSS 120B) and finds task-difficulty-specific differences: the model's final answer is decodable from activations far earlier in the CoT than a monitor is able to say, especially for easy recall-based MMLU questions. We contrast this with genuine reasoning in difficult multihop GPQA-Diamond questions. Despite this, inflection points (e.g., backtracking, 'aha' moments) occur almost exclusively in responses where probes show large belief shifts, suggesting these behaviors track genuine uncertainty rather than learned "reasoning theater." Finally, probe-guided early exit reduces tokens by up to 80% on MMLU and 30% on GPQA-Diamond with similar accuracy, positioning attention probing as an efficient tool for detecting performative reasoning and enabling adaptive computation.
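The probe-guided early exit described in the abstract can be sketched as a simple decision rule over per-step probe outputs: once the probe's confidence in one answer stays above a threshold for a few consecutive steps, stop generating CoT and emit that answer. The function interface and the `patience` heuristic below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def probe_guided_early_exit(step_probs, threshold=0.95, patience=3):
    """Decide when to stop the CoT based on probe confidence.

    step_probs: array of shape (n_steps, n_answers), the probe's answer
    distribution at each CoT step (hypothetical interface; in practice this
    would hook into the decoding loop).
    Returns (exit_step, answer_index)."""
    confident_streak = 0
    for t, probs in enumerate(step_probs):
        if probs.max() >= threshold:
            confident_streak += 1
            if confident_streak >= patience:   # require sustained confidence
                return t, int(probs.argmax())
        else:
            confident_streak = 0               # belief shifted; keep reasoning
    # No early exit: fall back to the answer at the end of the full CoT.
    return len(step_probs) - 1, int(step_probs[-1].argmax())

# Toy trace: the probe locks onto answer 2 with high confidence from step 6 on.
probs = np.full((10, 4), 0.25)
probs[4:] = 0.02
probs[4:, 2] = 0.94
probs[6:] = 0.01
probs[6:, 2] = 0.97
step, ans = probe_guided_early_exit(probs)
print(step, ans)  # exits at step 8, answering 2
```

On easy questions the streak condition triggers early and most of the CoT is skipped; on hard questions with shifting beliefs the streak keeps resetting, which matches the smaller token savings the abstract reports for GPQA-Diamond.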