Conformal Thinking: Risk Control for Reasoning on a Compute Budget
AI Summary
Proposes a framework that controls the risk of LLM reasoning under a compute budget while optimizing computational efficiency.
Main Contributions
- Reframes the budget-setting problem for LLM reasoning as risk control
- Introduces upper and lower thresholds to control the reasoning process
- Uses distribution-free risk control to optimally calibrate the thresholds
- Introduces an efficiency loss to select the most computationally efficient exit mechanism
Methodology
Given a validation set and a target risk, the upper and lower stopping thresholds are calibrated to control the reasoning process, and an efficiency loss is used to select among competing exit mechanisms.
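The calibration step can be sketched as follows. This is a minimal, hypothetical illustration of distribution-free risk control for the upper (confidence-based) threshold only, not the paper's actual algorithm: it scans candidate thresholds from strict to lenient and keeps the most lenient one whose Hoeffding upper confidence bound on the validation error rate stays under the target risk. The function name and the Hoeffding bound are assumptions for illustration.

```python
import math

def calibrate_threshold(confidences, correct, target_risk, delta=0.05):
    """Hypothetical sketch: pick the most lenient confidence threshold
    whose certified error rate (among early-stopped instances) stays
    below target_risk with probability 1 - delta."""
    best = None
    # Scan thresholds from strict (high) to lenient (low); at threshold t,
    # we stop reasoning early on instances with confidence >= t.
    for t in sorted(set(confidences), reverse=True):
        stopped = [c for conf, c in zip(confidences, correct) if conf >= t]
        if not stopped:
            continue
        risk_hat = 1 - sum(stopped) / len(stopped)  # empirical error rate
        # Hoeffding upper confidence bound on the true risk.
        bound = risk_hat + math.sqrt(math.log(1 / delta) / (2 * len(stopped)))
        if bound <= target_risk:
            best = t  # still certified safe; try a more lenient threshold
        else:
            break  # fixed-sequence testing: stop at the first violation
    return best
```

On a toy validation set where confident instances are reliably correct, the procedure returns the strict threshold and refuses to loosen it once the certified risk bound is exceeded.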
Original Abstract
Reasoning Large Language Models (LLMs) enable test-time scaling, with dataset-level accuracy improving as the token budget increases, motivating adaptive reasoning -- spending tokens when they improve reliability and stopping early when additional computation is unlikely to help. However, setting the token budget, as well as the threshold for adaptive reasoning, is a practical challenge that entails a fundamental risk-accuracy trade-off. We re-frame the budget setting problem as risk control, limiting the error rate while minimizing compute. Our framework introduces an upper threshold that stops reasoning when the model is confident (risking incorrect output) and a novel parametric lower threshold that preemptively stops unsolvable instances (risking premature stoppage). Given a target risk and a validation set, we use distribution-free risk control to optimally specify these stopping mechanisms. For scenarios with multiple budget-controlling criteria, we incorporate an efficiency loss to select the most computationally efficient exiting mechanism. Empirical results across diverse reasoning tasks and models demonstrate the effectiveness of our risk control approach, showing computational efficiency gains from the lower threshold and ensemble stopping mechanisms while adhering to the user-specified risk target.
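The efficiency-loss selection mentioned in the abstract can be illustrated with a rough sketch. This is an assumption-laden simplification, not the paper's procedure: given several candidate exit mechanisms, each with a validated risk bound and an average token cost measured on the validation set, pick the cheapest mechanism among those that meet the user's risk target. All names and the tuple layout are hypothetical.

```python
def select_exit_mechanism(mechanisms, target_risk):
    """Hypothetical sketch: among exit mechanisms whose certified risk
    bound meets the target, return the one with the lowest mean token
    cost; return None if no mechanism is safe.

    mechanisms: list of (name, risk_bound, mean_tokens) tuples.
    """
    safe = [m for m in mechanisms if m[1] <= target_risk]
    # Efficiency loss here is simply expected compute (mean tokens).
    return min(safe, key=lambda m: m[2]) if safe else None
```

For example, with candidates ("upper", 0.08, 420), ("lower", 0.12, 350), and ("ensemble", 0.25, 300), a target risk of 0.15 rules out the ensemble and selects the lower-threshold mechanism as the cheapest safe option.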