LLM Reasoning Relevance: 10/10

Think Before You Lie: How Reasoning Improves Honesty

Ann Yuan, Asma Ghandeharioun, Carter Blum, Alicia Machado, Jessica Hoffmann, Daphne Ippolito, Martin Wattenberg, Lucas Dixon, Katja Filippova
arXiv: 2603.09957v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

The study finds that reasoning makes LLMs more honest, contrary to what is observed in humans, and explains the underlying mechanism.

Key Contributions

  • Shows that reasoning increases honesty in LLMs
  • Reveals how the geometry of the representational space affects honesty
  • Proposes a stability-based explanation for honest behavior in LLMs

Methodology

Constructs a dataset of moral trade-offs, compares LLM honesty with and without reasoning, and analyzes the models' representational spaces.

Original Abstract

While existing evaluations of large language models (LLMs) measure deception rates, the underlying conditions that give rise to deceptive behavior are poorly understood. We investigate this question using a novel dataset of realistic moral trade-offs where honesty incurs variable costs. Contrary to humans, who tend to become less honest given time to deliberate (Capraro, 2017; Capraro et al., 2019), we find that reasoning consistently increases honesty across scales and for several LLM families. This effect is not only a function of the reasoning content, as reasoning traces are often poor predictors of final behaviors. Rather, we show that the underlying geometry of the representational space itself contributes to the effect. Namely, we observe that deceptive regions within this space are metastable: deceptive answers are more easily destabilized by input paraphrasing, output resampling, and activation noise than honest ones. We interpret the effect of reasoning in this vein: generating deliberative tokens as part of moral reasoning entails the traversal of a biased representational space, ultimately nudging the model toward its more stable, honest defaults.
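The metastability claim above can be illustrated with a toy sketch (everything here is hypothetical and not the paper's code or model): treat each answer as sitting at some signed "logit" distance from the honest/deceptive decision boundary, resample it under noise standing in for paraphrasing, output resampling, or activation perturbation, and compare flip rates. The toy's assumption is that deceptive answers sit in a shallow basin (small |logit|) and honest ones in a deep basin.

```python
import random

random.seed(0)

def resample_answer(base_logit: float, noise_scale: float = 0.5) -> str:
    """Toy model: a positive logit yields an honest answer.
    Resampling adds Gaussian noise, standing in for input
    paraphrasing, output resampling, or activation noise."""
    return "honest" if base_logit + random.gauss(0.0, noise_scale) > 0 else "deceptive"

def flip_rate(base_logit: float, n: int = 1000) -> float:
    """Fraction of noisy resamples that disagree with the noiseless answer."""
    base = "honest" if base_logit > 0 else "deceptive"
    return sum(resample_answer(base_logit) != base for _ in range(n)) / n

# Assumed basin depths (hypothetical numbers): honest answers are far
# from the boundary, deceptive ones metastably close to it.
honest_flip = flip_rate(+1.5)     # deep honest basin -> rarely flips
deceptive_flip = flip_rate(-0.2)  # shallow deceptive basin -> flips often
print(f"honest flip rate:    {honest_flip:.3f}")
print(f"deceptive flip rate: {deceptive_flip:.3f}")
```

Under these assumed depths, perturbation destabilizes the deceptive answer far more often than the honest one, which is the qualitative pattern the abstract reports.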

Tags

LLM Reasoning Honesty Morality Representation Learning

arXiv Categories

cs.AI cs.CL cs.LG