AI Agents Relevance: 9/10

An Empirical Study of Collective Behaviors and Social Dynamics in Large Language Model Agents

Farnoosh Hashemi, Michael W. Macy
arXiv: 2602.03775v1 Published: 2026-02-03 Updated: 2026-02-03

AI Summary

Studies the behaviors, biases, and harmful activities of agents on an LLM-driven social platform, and proposes the CoST method to mitigate them.

Key Contributions

  • Analyzes homophily and social influence among LLM agents on a social platform
  • Studies the toxic language and interaction patterns of LLM agents
  • Proposes the CoST method to reduce harmful behaviors of LLM agents

Methodology

Analyzes 7M posts and the interactions of 32K agents on the Chirper.ai platform using statistical and text analysis, and designs a CoST intervention experiment.
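As a minimal illustration of the kind of statistical analysis involved, the sketch below computes a simple homophily index over agent interactions: the observed fraction of same-attribute ties minus the fraction expected under random mixing. The agent names, attribute labels, and the metric itself are illustrative assumptions, not the paper's actual data or method.

```python
from collections import Counter

def homophily_index(edges, attr):
    """Observed same-attribute edge rate minus the rate expected
    if interaction partners were chosen uniformly at random."""
    same = sum(attr[u] == attr[v] for u, v in edges)
    observed = same / len(edges)
    counts = Counter(attr.values())
    n = sum(counts.values())
    # Probability a random agent pair shares the attribute value.
    expected = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return observed - expected

# Toy data: agents mostly interact with same-leaning agents.
edges = [("a", "b"), ("b", "c"), ("d", "e"), ("e", "f"), ("c", "d")]
leaning = {"a": "L", "b": "L", "c": "L", "d": "R", "e": "R", "f": "R"}
print(round(homophily_index(edges, leaning), 2))  # positive => homophily
```

A positive index indicates that same-leaning interactions occur more often than chance, the signature of homophily the paper reports in LLM agent networks.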

Original Abstract

Large Language Models (LLMs) increasingly mediate our social, cultural, and political interactions. While they can simulate some aspects of human behavior and decision-making, it remains underexplored whether repeated interactions with other agents amplify their biases or lead to exclusionary behaviors. To this end, we study Chirper.ai, an LLM-driven social media platform, analyzing 7M posts and interactions among 32K LLM agents over a year. We start with homophily and social influence among LLMs, finding that, as with humans, their social networks exhibit these fundamental phenomena. Next, we study the toxic language of LLMs, its linguistic features, and their interaction patterns, finding that LLMs show different structural patterns in toxic posting than humans do. After studying the ideological leaning in LLMs' posts and the polarization in their community, we focus on how to prevent their potential harmful activities. We present a simple yet effective method, called Chain of Social Thought (CoST), that reminds LLM agents to avoid harmful posting.
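The abstract describes CoST only as a method that "reminds LLM agents to avoid harmful posting." A minimal sketch of that idea is a prompt wrapper that prepends a reflection instruction before the agent generates a post; the reminder wording and the function names here are assumptions, not the paper's implementation.

```python
# Hypothetical CoST-style reminder; the paper's exact wording is not given here.
COST_REMINDER = (
    "Before posting, think step by step about how your message could "
    "affect others on the platform. Avoid toxic, exclusionary, or "
    "harmful language; then write your post."
)

def with_cost(agent_prompt: str) -> str:
    """Wrap an agent's posting prompt with the CoST-style reminder."""
    return f"{COST_REMINDER}\n\n{agent_prompt}"

prompt = with_cost("Write a reply to the thread about the election.")
```

The wrapped prompt would then be sent to the underlying LLM in place of the raw posting instruction, so the safety reflection precedes every generated post.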

Tags

LLM, Social Media, Agent, Bias, Toxicity, CoST

arXiv Categories

cs.SI cs.AI