Process Supervision for Chain-of-Thought Reasoning via Monte Carlo Net Information Gain
AI Summary
Proposes a method that uses information gain to automatically generate labels for CoT reasoning steps, improving the reliability and efficiency of LLM reasoning.
Key Contributions
- Proposes an information-theoretic method for automatically generating step-level labels
- Reduces computational complexity to O(N)
- Improves CoT reasoning performance across a variety of tasks
Methodology
Uses Monte Carlo sampling to estimate how each reasoning step changes the probability of reaching the correct answer; the resulting information gain serves as a signal of step quality.
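The Monte Carlo labeling idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`sample_completion`, `is_correct`), the sample count, and the use of a log-probability difference as the gain are all assumptions for the sketch. Note that it makes one Monte Carlo estimate per step, i.e. a single O(N) pass over the chain.

```python
import math

def estimate_p_correct(prefix_steps, sample_completion, is_correct, num_samples=16):
    # Monte Carlo estimate of P(correct answer | reasoning prefix):
    # sample completions from the prefix and count how many are correct.
    hits = sum(is_correct(sample_completion(prefix_steps)) for _ in range(num_samples))
    return hits / num_samples

def step_info_gain(steps, sample_completion, is_correct, num_samples=16, eps=1e-6):
    # Label each step by the change in log P(correct) it induces
    # relative to the previous prefix (illustrative gain definition).
    gains = []
    p_prev = estimate_p_correct([], sample_completion, is_correct, num_samples)
    for t in range(1, len(steps) + 1):
        p_t = estimate_p_correct(steps[:t], sample_completion, is_correct, num_samples)
        gains.append(math.log(p_t + eps) - math.log(p_prev + eps))
        p_prev = p_t
    return gains
```

A positive gain marks a step that made the correct answer more likely; a negative gain flags a likely error, which is the step-level signal a PRM can be trained on.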
Original Abstract
Multi-step reasoning improves the capabilities of large language models (LLMs) but increases the risk of errors propagating through intermediate steps. Process reward models (PRMs) mitigate this by scoring each step individually, enabling fine-grained supervision and improved reliability. Existing methods for training PRMs rely on costly human annotations or computationally intensive automatic labeling. We propose a novel approach to automatically generate step-level labels using Information Theory. Our method estimates how each reasoning step affects the likelihood of the correct answer, providing a signal of step quality. Importantly, it reduces computational complexity to $\mathcal{O}(N)$, improving over the previous $\mathcal{O}(N \log N)$ methods. We demonstrate that these labels enable effective chain-of-thought selection in best-of-$K$ evaluation settings across diverse reasoning benchmarks, including mathematics, Python programming, SQL, and scientific question answering. This work enables scalable and efficient supervision of LLM reasoning, particularly for tasks where error propagation is critical.