Mechanistic Origin of Moral Indifference in Language Models
AI Summary
The paper reveals a problem of moral indifference in LLMs and proposes improving moral reasoning by reconstructing their latent representations.
Main Contributions
- Found that LLMs fail to represent distinctions between opposed moral concepts in their latent representations
- Proposed a method that uses Sparse Autoencoders to reconstruct moral features
- Verified that the reconstructed representations improve moral reasoning ability
Methodology
The authors construct moral vectors based on Prototype Theory, analyze the latent representations of LLMs, and use Sparse Autoencoders to reconstruct those representations.
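The summary names the pipeline only briefly. Below is a minimal sketch, in Python, of how prototype-based moral vectors and graded typicality scores might be computed; the `prototype` and `typicality` helpers, the 768-dimensional random placeholder embeddings, and the two-category setup are illustrative assumptions, not the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder activations standing in for a model's latent representations of
# "good" vs. "bad" moral judgments (e.g., Social-Chemistry-101 rules-of-thumb).
good_embeds = rng.normal(size=(100, 768))
bad_embeds = rng.normal(size=(100, 768))

def prototype(embeds: np.ndarray) -> np.ndarray:
    """Prototype Theory summarizes a category by its mean exemplar."""
    return embeds.mean(axis=0)

def typicality(x: np.ndarray, proto: np.ndarray) -> float:
    """Cosine similarity to a category prototype as a graded typicality score."""
    return float(x @ proto / (np.linalg.norm(x) * np.linalg.norm(proto)))

good_proto, bad_proto = prototype(good_embeds), prototype(bad_embeds)
# Moral indifference would show up as near-identical typicality scores against
# opposed prototypes; a well-separated representation would not.
print(typicality(good_embeds[0], good_proto), typicality(good_embeds[0], bad_proto))
```

Under Prototype Theory, a category is summarized by the centroid of its exemplars, and typicality falls off with distance from that centroid, which is what the cosine score approximates here.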
Original Abstract
Existing behavioral alignment techniques for Large Language Models (LLMs) often neglect the discrepancy between surface compliance and internal unaligned representations, leaving LLMs vulnerable to long-tail risks. More crucially, we posit that LLMs possess an inherent state of moral indifference due to compressing distinct moral concepts into uniform probability distributions. We verify and remedy this indifference in LLMs' latent representations, utilizing 251k moral vectors constructed upon Prototype Theory and the Social-Chemistry-101 dataset. First, our analysis across 23 models reveals that current LLMs fail to represent the distinction between opposed moral categories and the fine-grained typicality gradients within these categories; notably, neither model scaling, architecture, nor explicit alignment reshapes this indifference. We then employ Sparse Autoencoders on Qwen3-8B, isolate mono-semantic moral features, and reconstruct their topological relationships in a targeted manner to align with ground-truth moral vectors. This representational alignment naturally improves moral reasoning and granularity, achieving a 75% pairwise win rate on the independent adversarial Flames benchmark. Finally, we elaborate on the remedial nature of current intervention methods from the standpoint of experientialist philosophy, arguing that endogenously aligned AI may require a transformation from post-hoc corrections to proactive cultivation.
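For readers unfamiliar with the method named in the abstract, the following is a minimal, generic sparse-autoencoder sketch in PyTorch. The layer sizes, the L1 coefficient, and the random tensors standing in for Qwen3-8B activations are assumptions for illustration; the paper's actual architecture and training setup are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder; an L1 penalty on the hidden activations
    encourages sparse, potentially mono-semantic features."""
    def __init__(self, d_model: int = 4096, d_hidden: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        feats = torch.relu(self.encoder(x))   # sparse feature activations
        recon = self.decoder(feats)           # reconstruction of the input
        return recon, feats

sae = SparseAutoencoder()
acts = torch.randn(32, 4096)                  # placeholder, not real Qwen3-8B activations
recon, feats = sae(acts)
l1_coeff = 1e-3                               # assumed sparsity strength
loss = F.mse_loss(recon, acts) + l1_coeff * feats.abs().mean()
loss.backward()                               # one illustrative training step
```

Trained this way, individual hidden units tend to fire for narrow, interpretable directions in activation space, which is the property the paper exploits when isolating and then reconstructing moral features.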