LLM Reasoning relevance: 9/10

Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs

Saugata Purkayastha, Pranav Kushare, Pragya Paramita Pal, Sukannya Purkayastha
arXiv: 2603.09434v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

This paper shows that LLMs tend to overlook commonsense contradictions when performing moral reasoning, and identifies a narrative focus bias.

Main Contributions

  • Introduces CoMoral, a benchmark dataset
  • Exposes gaps in LLMs' commonsense understanding during moral reasoning
  • Identifies a narrative focus bias in LLMs

Methodology

Construct a dataset of moral dilemmas with embedded commonsense contradictions, evaluate LLMs of different sizes on it, and analyze the sources of their errors.
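The evaluation described above can be sketched as a simple loop: present each dilemma, check whether the model flags the embedded contradiction, and compare detection rates by whether the contradiction is attributed to the narrator or to a secondary character. This is a minimal illustrative sketch, not the authors' code; the dilemma texts, the keyword-based detection check, and `query_model` are all hypothetical stand-ins.

```python
# Hedged sketch of a CoMoral-style evaluation loop (illustrative only).
# Each item embeds a commonsense contradiction (a bicycle at 300 km/h),
# attributed either to the narrator or to a secondary character.
ITEMS = [
    {"attribution": "narrator",
     "dilemma": "I rode my bicycle at 300 km/h to reach the hospital. "
                "Should I have stopped to help the stranded motorist?"},
    {"attribution": "secondary",
     "dilemma": "My friend rode his bicycle at 300 km/h to reach the "
                "hospital. Should he have stopped to help the motorist?"},
]

def query_model(prompt: str) -> str:
    """Stand-in for an LLM API call; replace with a real client.
    This toy stub mimics the narrative focus bias the paper reports:
    it flags the contradiction only for the secondary character."""
    if "friend" in prompt:
        return "There is a contradiction: bicycles cannot reach 300 km/h."
    return "He should have stopped; compassion matters most."

def detects_contradiction(answer: str) -> bool:
    """Crude keyword check; a real protocol would be stricter."""
    return "contradiction" in answer.lower()

def evaluate(items):
    """Return the contradiction-detection rate per attribution condition."""
    rates = {}
    for cond in ("narrator", "secondary"):
        subset = [it for it in items if it["attribution"] == cond]
        hits = sum(detects_contradiction(query_model(it["dilemma"]))
                   for it in subset)
        rates[cond] = hits / len(subset)
    return rates

print(evaluate(ITEMS))  # → {'narrator': 0.0, 'secondary': 1.0}
```

With a real model behind `query_model`, a gap between the two rates would be evidence of the narrative focus bias the paper describes.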

Original Abstract

Large Language Models (LLMs) are increasingly deployed across diverse real-world applications and user communities. As such, it is crucial that these models remain both morally grounded and knowledge-aware. In this work, we uncover a critical limitation of current LLMs -- their tendency to prioritize moral reasoning over commonsense understanding. To investigate this phenomenon, we introduce CoMoral, a novel benchmark dataset containing commonsense contradictions embedded within moral dilemmas. Through extensive evaluation of ten LLMs across different model sizes, we find that existing models consistently struggle to identify such contradictions without prior signal. Furthermore, we observe a pervasive narrative focus bias, wherein LLMs more readily detect commonsense contradictions when they are attributed to a secondary character rather than the primary (narrator) character. Our comprehensive analysis underscores the need for enhanced reasoning-aware training to improve the commonsense robustness of large language models.

Tags

LLM, moral reasoning, commonsense reasoning, bias

arXiv Categories

cs.CL cs.AI