Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models
AI Summary
Training GenRMs to pursue outcome accuracy alone leads to deceptive alignment; this paper proposes a Rationale Consistency metric and an improved training method built on it.
Main Contributions
- Proposes the Rationale Consistency metric, which measures how well the model's reasoning process aligns with human judgment (see the sketch after this list)
- Shows that existing models suffer from deceptive alignment, reaching correct judgments through flawed reasoning
- Proposes a hybrid training method that combines Rationale Consistency with outcome accuracy, improving GenRM performance and mitigating deceptive alignment
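The summary does not spell out how Rationale Consistency is scored. The sketch below is only an illustration of the idea: grade the GenRM's stated rationale against a human-annotated reference rationale on a 0-1 scale. The `JudgeSample` fields and the token-overlap scorer are assumptions; the paper's actual grading protocol (likely an LLM- or rubric-based grader) is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class JudgeSample:
    prompt: str            # task shown to the policy model
    chosen: str            # human-preferred response
    rejected: str          # human-dispreferred response
    human_rationale: str   # human-written reason why `chosen` is better

def rationale_consistency(model_rationale: str, sample: JudgeSample) -> float:
    """Score how closely the GenRM's stated reasoning matches the human
    rationale on a 0-1 scale. The token-overlap below is only a stand-in
    for whatever grader the paper actually uses (e.g., an LLM-based rubric).
    """
    ref = set(sample.human_rationale.lower().split())
    cand = set(model_rationale.lower().split())
    if not ref or not cand:
        return 0.0
    return len(ref & cand) / len(ref | cand)
```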
Methodology
Models are evaluated with the Rationale Consistency metric, which is then combined with outcome accuracy into a hybrid signal that supervises GenRM training.
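A minimal sketch of how such a hybrid signal might be formed, reusing `JudgeSample` and `rationale_consistency` from the sketch above. The linear interpolation and the weight `alpha` are assumptions for illustration; the paper states that the two signals are combined but this is not necessarily its exact functional form.

```python
def outcome_accuracy(predicted_choice: str, human_choice: str) -> float:
    """1.0 if the GenRM picks the same winner as the human label, else 0.0."""
    return 1.0 if predicted_choice == human_choice else 0.0

def hybrid_signal(predicted_choice: str,
                  model_rationale: str,
                  sample: JudgeSample,
                  human_choice: str = "chosen",
                  alpha: float = 0.5) -> float:
    """Blend outcome correctness with rationale consistency into a single
    training signal. `alpha` is an assumed interpolation weight."""
    acc = outcome_accuracy(predicted_choice, human_choice)
    rc = rationale_consistency(model_rationale, sample)
    return alpha * acc + (1.0 - alpha) * rc
```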
Original Abstract
Generative Reward Models (GenRMs) and LLM-as-a-Judge exhibit deceptive alignment by producing correct judgments for incorrect reasons, as they are trained and evaluated to prioritize Outcome Accuracy, which undermines their ability to generalize during RLHF. We introduce Rationale Consistency, a fine-grained metric that quantifies the alignment between the model's reasoning process and human judgment. Our evaluation of frontier models reveals that rationale consistency effectively discriminates among state-of-the-art models and detects deceptive alignment, while outcome accuracy falls short in both respects. To mitigate this gap, we introduce a hybrid signal that combines rationale consistency with outcome accuracy for GenRM training. Our training method achieves state-of-the-art performance on RM-Bench (87.1%) and JudgeBench (82%), surpassing outcome-only baselines by an average of 5%. Using RM during RLHF, our method effectively improves performance as demonstrated on Arena Hard v2, notably yielding a 7% improvement in creative writing tasks. Further analysis confirms that our method escapes the deceptive alignment trap, effectively reversing the decline in rationale consistency observed in outcome-only training.