AI Agents relevance: 9/10

Emulating Aggregate Human Choice Behavior and Biases with GPT Conversational Agents

Stephen Pilli, Vivek Nallur
arXiv: 2602.05597v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

The paper investigates the ability of GPT models to emulate human decision biases and interactive behavior; the results show that GPT models reproduce human biases well.

Key Contributions

  • Validated the ability of GPT models to emulate human decision biases in interactive settings
  • Analyzed differences between GPT models in how well they align with human behavior
  • Offered recommendations for designing bias-aware AI systems

Methodology

Data were collected through a human experiment in which participants made decisions via chatbot dialogues; LLMs based on GPT-4 and GPT-5 were then given participant demographics and dialogue transcripts to simulate the same experimental conditions, and the models' decision behavior was compared against the humans'.
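A minimal sketch of this simulation step, assuming the openai Python client: a model is prompted with one participant's demographics and chatbot transcript and asked for a final decision. The simulate_participant helper, the prompt wording, and the default model name are illustrative assumptions, not the authors' actual protocol.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulate_participant(demographics: dict, transcript: str,
                         model: str = "gpt-4") -> str:
    """Ask the model to re-make the decision a human participant faced.

    Hypothetical sketch: prompt format and model name are assumptions.
    """
    persona = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    messages = [
        {"role": "system",
         "content": (f"You are a study participant with this profile: {persona}. "
                     "Answer exactly as such a person would, in one short sentence.")},
        {"role": "user",
         "content": ("Here is the conversation you had with the chatbot:\n"
                     f"{transcript}\n"
                     "What is your final decision?")},
    ]
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

# Usage idea: run the same simulated participant against two model
# families and compare each elicited choice with the human's recorded one.
# choice = simulate_participant({"age": 34, "gender": "female"}, transcript)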

Original Abstract

Cognitive biases often shape human decisions. While large language models (LLMs) have been shown to reproduce well-known biases, a more critical question is whether LLMs can predict biases at the individual level and emulate the dynamics of biased human behavior when contextual factors, such as cognitive load, interact with these biases. We adapted three well-established decision scenarios into a conversational setting and conducted a human experiment (N=1100). Participants engaged with a chatbot that facilitates decision-making through simple or complex dialogues. Results revealed robust biases. To evaluate how LLMs emulate human decision-making under similar interactive conditions, we used participant demographics and dialogue transcripts to simulate these conditions with LLMs based on GPT-4 and GPT-5. The LLMs reproduced human biases with precision. We found notable differences between models in how they aligned with human behavior. This has important implications for designing and evaluating adaptive, bias-aware LLM-based AI systems in interactive contexts.

Tags

LLM, decision biases, cognitive biases, conversational agents, behavior simulation

arXiv Categories

cs.AI cs.HC cs.MA