LLM Reasoning relevance: 8/10

Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

Eun Cheol Choi, Lindsay E. Young, Emilio Ferrara
arXiv: 2602.04674v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

When simulating human susceptibility to misinformation, LLMs overstate the influence of attitudes and largely ignore the role of social networks.

Key Contributions

  • Reveals systematic biases in how LLMs simulate misinformation susceptibility.
  • Evaluates LLMs' ability to reproduce human patterns of misinformation belief and sharing.
  • Analyzes model-generated reasoning and LLM training data to explain where the biases originate.

Methodology

Participant profiles are constructed from social survey data; an LLM is prompted to role-play each respondent; LLM outputs are compared against the human responses; and feature-outcome associations are analyzed, as sketched below.
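
A minimal Python sketch of the profile-to-prompt step follows. The record fields, the rating scale wording, and build_profile_prompt are illustrative assumptions, not the paper's actual survey instrument:

    def build_profile_prompt(record: dict, headline: str) -> str:
        """Render one survey respondent's features as a persona prompt."""
        profile = "\n".join(f"- {k}: {v}" for k, v in record.items())
        return (
            "You are answering a survey as the person described below.\n"
            f"Profile:\n{profile}\n"
            "On a scale of 1 (not at all accurate) to 4 (very accurate), "
            "how accurate is this headline? Reply with a single number.\n"
            f'Headline: "{headline}"'
        )

    # Hypothetical record covering the four feature groups the paper names:
    # network, demographic, attitudinal, and behavioral.
    record = {
        "personal_network_size": 120,   # network feature
        "age": 34,                      # demographic feature
        "trust_in_media": "low",        # attitudinal feature
        "shares_news_weekly": True,     # behavioral feature
    }
    print(build_profile_prompt(record, "Example headline"))

The rendered prompt would then be sent to an LLM, and its numeric reply recorded as that simulated respondent's answer.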

Original Abstract

Large language models (LLMs) are increasingly used as proxies for human judgment in computational social science, yet their ability to reproduce patterns of susceptibility to misinformation remains unclear. We test whether LLM-simulated survey respondents, prompted with participant profiles drawn from social survey data measuring network, demographic, attitudinal and behavioral features, can reproduce human patterns of misinformation belief and sharing. Using three online surveys as baselines, we evaluate whether LLM outputs match observed response distributions and recover feature-outcome associations present in the original survey data. LLM-generated responses capture broad distributional tendencies and show modest correlation with human responses, but consistently overstate the association between belief and sharing. Linear models fit to simulated responses exhibit substantially higher explained variance and place disproportionate weight on attitudinal and behavioral features, while largely ignoring personal network characteristics, relative to models fit to human responses. Analyses of model-generated reasoning and LLM training data suggest that these distortions reflect systematic biases in how misinformation-related concepts are represented. Our findings suggest that LLM-based survey simulations are better suited for diagnosing systematic divergences from human judgment than for substituting it.
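
As a toy illustration of the association-recovery analysis described above, the sketch below fits the same linear model to synthetic "human" and "simulated" responses and compares explained variance and per-feature weights. The feature names and coefficients are invented to mimic the reported bias pattern, not drawn from the paper's data:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 500
    features = ["network_size", "age", "attitude", "sharing_habit"]
    X = rng.normal(size=(n, len(features)))

    # Synthetic stand-ins: the "human" outcome loads on the network feature,
    # while the "simulated" outcome overweights attitudinal/behavioral ones
    # and carries less noise (hence higher explained variance).
    y_human = X @ np.array([0.5, 0.2, 0.3, 0.2]) + rng.normal(0, 1.0, n)
    y_sim = X @ np.array([0.05, 0.1, 0.9, 0.8]) + rng.normal(0, 0.3, n)

    for label, y in [("human", y_human), ("simulated", y_sim)]:
        model = LinearRegression().fit(X, y)
        print(f"{label}: R^2 = {model.score(X, y):.2f}")
        for name, w in zip(features, model.coef_):
            print(f"  {name}: {w:+.2f}")

In this toy setup the simulated fit shows markedly higher R^2 and larger attitudinal and behavioral weights, the same qualitative pattern the abstract reports.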

Tags

LLM, Misinformation, Bias, Social Science, Simulation

arXiv Categories

cs.SI cs.AI cs.CL