AI Agents relevance: 8/10

Biased Error Attribution in Multi-Agent Human-AI Systems Under Delayed Feedback

Teerthaa Parakh, Karen M. Feigh
arXiv: 2603.23419v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

Studies biased error attribution in human decision-making within multi-agent human-AI systems under delayed feedback.

Key Contributions

  • Reveals human attribution bias under delayed feedback in multi-agent systems.
  • Finds that humans make stronger corrective adjustments after losses than after gains.
  • Highlights the importance of decision-support systems that strengthen causal understanding.

Methodology

Using a controlled, game-based experiment, the authors analyze how participants adjust their behavior after positive and negative outcomes.
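The core asymmetry measured in such an experiment can be sketched as a comparison of average adjustment magnitudes after gains versus losses. The following is a minimal illustrative sketch; the data, trial representation, and function name are assumptions for exposition, not the paper's actual dataset or analysis pipeline.

```python
def mean_adjustment(trials):
    """Average absolute change in a participant's decision after
    gains vs. losses. Each trial is a pair (outcome, decision_change),
    where outcome > 0 is a gain and outcome < 0 is a loss."""
    gains = [abs(d) for o, d in trials if o > 0]
    losses = [abs(d) for o, d in trials if o < 0]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(gains), avg(losses)

# Illustrative (fabricated) data showing the reported pattern:
# participants adjust more strongly after negative outcomes.
trials = [(+1, 0.10), (+1, 0.20), (-1, 0.60),
          (-1, 0.50), (+1, 0.15), (-1, 0.70)]
gain_adj, loss_adj = mean_adjustment(trials)
print(loss_adj > gain_adj)  # asymmetric response: stronger correction after losses
```

In the paper's setting, a finding of `loss_adj > gain_adj` on real behavioral data would correspond to the asymmetric corrective adjustments the authors report.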

Original Abstract

Human decision-making is strongly influenced by cognitive biases, particularly under conditions of uncertainty and risk. While prior work has examined bias in single-step decisions with immediate outcomes and in human interaction with a single autonomous agent, comparatively little attention has been paid to decision-making under delayed outcomes involving multiple AI agents, where decisions at each step affect subsequent states. In this work, we study how delayed outcomes shape decision-making and responsibility attribution in a multi-agent human-AI task. Using a controlled game-based experiment, we analyze how participants adjust their behavior following positive and negative outcomes. We observe asymmetric responses to gains and losses, with stronger corrective adjustments after negative outcomes. Importantly, participants often fail to correctly identify the actions that caused failure and misattribute responsibility across AI agents, leading to systematic revisions of decisions that are weakly related to the underlying causes of poor performance. We refer to this phenomenon as a form of attribution bias, manifested as biased error attribution under delayed feedback. Our findings highlight how cognitive biases can be amplified in human-AI systems with delayed outcomes and multiple autonomous agents, underscoring the need for decision-support systems that better support causal understanding and learning over time.

Tags

human-AI interaction, cognitive bias, multi-agent systems, delayed feedback, attribution bias

arXiv Categories

cs.HC cs.AI