LLM Reasoning relevance: 9/10

CAMEL: Confidence-Gated Reflection for Reward Modeling

Zirui Zhu, Hailun Xu, Yang Luo, Yong Liu, Kanchan Sarkar, Kun Xu, Yang You
arXiv: 2602.20670v1 Published: 2026-02-24 Updated: 2026-02-24

AI Summary

CAMEL improves the accuracy and efficiency of reward models through confidence-gated reflection and counterfactual augmentation.

Key Contributions

  • Proposes CAMEL, a confidence-gated reflection framework
  • Introduces counterfactual prefix augmentation for model training
  • Achieves state-of-the-art performance on reward-model benchmarks

Methodology

The model first makes a lightweight single-token preference decision and invokes reflection only for low-confidence instances, with confidence estimated from the log-probability margin between verdict tokens. It is trained via reinforcement learning with counterfactual augmentation.
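The gating step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the verdict token names ("A"/"B") and the margin threshold are assumptions, and in practice the log-probabilities would come from the reward model's first decoded token.

```python
def verdict_with_gate(verdict_logprobs, margin_threshold=1.0):
    """Confidence-gated preference decision (illustrative sketch).

    verdict_logprobs: dict mapping each verdict token ("A" or "B",
        an assumed naming) to its log-probability at the decision step.
    margin_threshold: assumed cutoff below which the instance counts
        as low-confidence and reflection should be invoked.

    Returns (verdict, margin, needs_reflection).
    """
    lp_a = verdict_logprobs["A"]
    lp_b = verdict_logprobs["B"]
    # The log-probability margin serves as a free proxy for difficulty:
    # a large margin correlates with prediction correctness.
    margin = abs(lp_a - lp_b)
    verdict = "A" if lp_a > lp_b else "B"
    # Only low-margin (low-confidence) instances pay the cost of reflection.
    needs_reflection = margin < margin_threshold
    return verdict, margin, needs_reflection
```

High-margin instances keep the single-token cost path; only the uncertain minority triggers the more expensive generative reflection, which is what yields the accuracy-efficiency Pareto improvement claimed in the abstract.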

Original Abstract

Reward models play a fundamental role in aligning large language models with human preferences. Existing methods predominantly follow two paradigms: scalar discriminative preference models, which are efficient but lack interpretability, and generative judging models, which offer richer reasoning at the cost of higher computational overhead. We observe that the log-probability margin between verdict tokens strongly correlates with prediction correctness, providing a reliable proxy for instance difficulty without additional inference cost. Building on this insight, we propose CAMEL, a confidence-gated reflection framework that performs a lightweight single-token preference decision first and selectively invokes reflection only for low-confidence instances. To induce effective self-correction, we train the model via reinforcement learning with counterfactual prefix augmentation, which exposes the model to diverse initial verdicts and encourages genuine revision. Empirically, CAMEL achieves state-of-the-art performance on three widely used reward-model benchmarks with 82.9% average accuracy, surpassing the best prior model by 3.2% and outperforming 70B-parameter models using only 14B parameters, while establishing a strictly better accuracy-efficiency Pareto frontier.
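The counterfactual prefix augmentation mentioned in the abstract can be sketched as below: each training prompt is expanded into one rollout per possible initial verdict, so the policy is exposed to both a plausibly correct and a plausibly wrong first guess and must learn genuine revision rather than rubber-stamping. The prefix template and verdict token names are illustrative assumptions, not the paper's exact prompts.

```python
def augment_with_counterfactual_prefixes(prompt, verdict_tokens=("A", "B")):
    """Counterfactual prefix augmentation (illustrative sketch).

    For each candidate initial verdict, build a rollout in which the
    model is committed to that verdict and must reflect on it. During
    RL training, rewarding only correct final verdicts then encourages
    real self-correction instead of always confirming the prefix.
    """
    rollouts = []
    for v in verdict_tokens:
        # Assumed prefix template; the real system prompt is unspecified here.
        prefix = f"Initial verdict: {v}\nReflection:"
        rollouts.append(prompt + "\n" + prefix)
    return rollouts
```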

Tags

Reward Modeling  Confidence Gating  Reinforcement Learning  Counterfactual Augmentation

arXiv Categories

cs.CL cs.AI