Dual Optimal: Make Your LLM Peer-like with Dignity
AI Summary
The paper proposes a framework called Dignified Peer, which aims to make LLMs behave as dignified, peer-like interlocutors.
Key Contributions
- Proposes the Dignified Peer framework
- Constructs the PersonaKnob dataset
- Designs a tolerant constrained Lagrangian DPO algorithm
Methodology
The LLM is trained on the PersonaKnob dataset with the tolerant constrained Lagrangian DPO algorithm, and evaluated with an Item Response Theory (IRT) evaluation protocol.
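The paper does not spell out the training objective here, but a constrained Lagrangian variant of DPO typically optimizes one preference loss subject to tolerance bounds on the others, with Lagrange multipliers updated by dual ascent. The sketch below is an illustrative toy version under that assumption; the function names, the scalar log-ratio inputs, and the specific tolerance scheme are placeholders, not the paper's implementation.

```python
import math

def dpo_loss(logratio_chosen, logratio_rejected, beta=0.1):
    # Standard DPO loss: -log sigmoid(beta * margin), where the margin is the
    # difference between the chosen and rejected policy/reference log-ratios.
    margin = beta * (logratio_chosen - logratio_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def lagrangian_step(losses, lambdas, tolerances, eta=0.05):
    # losses[0] is the primary persona dimension; losses[1:] are constrained
    # to stay within per-dimension tolerances tau_k (the "tolerant" part).
    # Scalarized objective: L_0 + sum_k lambda_k * (L_k - tau_k).
    total = losses[0] + sum(
        lam * (loss - tau)
        for lam, loss, tau in zip(lambdas, losses[1:], tolerances)
    )
    # Dual ascent: a multiplier grows while its constraint is violated and
    # decays (clipped at zero) once the dimension is within tolerance,
    # which is what keeps any single persona dimension from collapsing.
    new_lambdas = [
        max(0.0, lam + eta * (loss - tau))
        for lam, loss, tau in zip(lambdas, losses[1:], tolerances)
    ]
    return total, new_lambdas
```

In this formulation, balancing "all persona dimensions" falls out of the dual ascent: dimensions that drift past their tolerance automatically receive more weight at the next step.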
Original Abstract
Current aligned language models exhibit a dual failure mode we term the Evasive Servant: they sycophantically validate flawed user beliefs while deflecting responsibility with boilerplate disclaimers. We propose the Dignified Peer framework, which counters servility with anti-sycophancy and trustworthiness, and mitigates evasiveness through empathy and creativity. Realizing this agent requires overcoming significant challenges in data supervision, objective collapse, and evaluation bias. We address these issues by introducing the PersonaKnob dataset, which features a compositional partial-order structure over multiple persona preferences. This data is utilized alongside a tolerant constrained Lagrangian DPO algorithm that dynamically balances all persona dimensions to prevent behavioral collapse. Additionally, we employ a psychometrically calibrated Item Response Theory evaluation protocol to disentangle latent model persona capability from confounders like judge biases. Extensive empirical studies demonstrate that our approach successfully builds an LLM agent that is both dignified and peer-like.
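The abstract's "psychometrically calibrated Item Response Theory evaluation" is not detailed, but the simplest IRT instance, the Rasch (1PL) model, already illustrates the idea: a model's latent persona ability is estimated jointly from pass/fail judgments and pre-calibrated item difficulties, rather than from a raw judge score. The following is a minimal sketch under that Rasch assumption; the helper names and the fixed-difficulty setup are illustrative, not the paper's protocol.

```python
import math

def rasch_prob(theta, b):
    # Rasch (1PL) IRT: probability that a model with latent ability theta
    # passes an evaluation item of difficulty b.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, steps=200, lr=0.1):
    # Maximum-likelihood estimate of theta by gradient ascent on the
    # Bernoulli log-likelihood, holding calibrated difficulties fixed.
    # Hard items passed pull theta up more than easy ones, which is how
    # IRT separates ability from item (or judge) severity.
    theta = 0.0
    for _ in range(steps):
        grad = sum(r - rasch_prob(theta, b)
                   for r, b in zip(responses, difficulties))
        theta += lr * grad
    return theta
```

A lenient judge simply shifts item difficulties downward during calibration; the ability estimate `theta` is then comparable across judges, which is the confounder-disentangling property the abstract refers to.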