Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty
AI Summary
The paper proposes an information-theoretic framework for analyzing LLM reasoning, arguing that the externalization of uncertainty is central to reasoning ability.
Key Contributions
- Proposes an information-theoretic framework for analyzing LLM reasoning
- Distinguishes procedural information from epistemic verbalization
- Verifies that uncertainty externalization drives reasoning ability
Methodology
The paper constructs an information-theoretic framework to analyze information flow during LLM reasoning, and validates the role of uncertainty externalization through experiments.
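The paper's framework itself is not spelled out here, but the standard information-theoretic quantity behind "uncertainty" is Shannon entropy over the model's next-token distribution. As a purely illustrative sketch (not the authors' method), a confident prediction has low entropy while a spread-out, uncertain one has high entropy; the latter is the kind of state a model might externalize with tokens like "Wait":

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical 4-token distributions for illustration only.
confident = [0.97, 0.01, 0.01, 0.01]   # mass concentrated on one token
uncertain = [0.25, 0.25, 0.25, 0.25]   # mass spread uniformly

print(entropy(confident))  # low entropy: little predictive uncertainty
print(entropy(uncertain))  # 2.0 bits: maximal uncertainty over 4 options
```

Under this reading, "epistemic verbalization" would correspond to the model emitting tokens that surface such high-entropy states, making them available for downstream control.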
Original Abstract
LLMs often exhibit Aha moments during reasoning, such as apparent self-correction following tokens like "Wait," yet their underlying mechanisms remain unclear. We introduce an information-theoretic framework that decomposes reasoning into procedural information and epistemic verbalization - the explicit externalization of uncertainty that supports downstream control actions. We show that purely procedural reasoning can become informationally stagnant, whereas epistemic verbalization enables continued information acquisition and is critical for achieving information sufficiency. Empirical results demonstrate that strong reasoning performance is driven by uncertainty externalization rather than specific surface tokens. Our framework unifies prior findings on Aha moments and post-training experiments, and offers insights for future reasoning model design.