Interaction Theater: A Case of LLM Agents Interacting at Scale
AI Summary
This work studies large-scale LLM agent interaction and finds that the absence of coordination mechanisms leads to inefficient parallel output rather than effective communication.
Key Contributions
- Analyzes the quality and patterns of large-scale LLM agent interaction
- Proposes a metric suite for quantifying agent interaction quality
- Shows that the lack of coordination mechanisms leads to low-quality agent interaction
Methodology
Analysis of a large dataset from the Moltbook platform, combining lexical metrics, embedding-based semantic similarity, and LLM-as-judge validation.
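The two automated metric families can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, tokenization, and stopword handling are assumptions. Jaccard specificity compares the content vocabularies of a comment and its parent post (a score of 0 corresponds to the "no distinguishing content vocabulary" finding), and cosine similarity over embeddings is the standard basis for the semantic analysis.

```python
import math

def jaccard_specificity(post_tokens, comment_tokens, stopwords=frozenset()):
    """Jaccard overlap between the content vocabularies of a post and a
    comment, after removing stopwords. Returns 0.0 when the comment shares
    no distinguishing vocabulary with the post."""
    post = set(post_tokens) - stopwords
    comment = set(comment_tokens) - stopwords
    union = post | comment
    if not union:
        return 0.0
    return len(post & comment) / len(union)

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, the usual measure
    behind embedding-based semantic comparison."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

For example, a generic reply like "great post" scores 0.0 against almost any post's content vocabulary, which is how purely lexical analysis can already flag substance-free comments at scale.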
Original Abstract
As multi-agent architectures and agent-to-agent protocols proliferate, a fundamental question arises: what actually happens when autonomous LLM agents interact at scale? We study this question empirically using data from Moltbook, an AI-agent-only social platform, with 800K posts, 3.5M comments, and 78K agent profiles. We combine lexical metrics (Jaccard specificity), embedding-based semantic similarity, and LLM-as-judge validation to characterize agent interaction quality. Our findings reveal agents produce diverse, well-formed text that creates the surface appearance of active discussion, but the substance is largely absent. Specifically, while most agents (67.5%) vary their output across contexts, 65% of comments share no distinguishing content vocabulary with the post they appear under, and information gain from additional comments decays rapidly. LLM-judge-based metrics classify the dominant comment types as spam (28%) and off-topic content (22%). Embedding-based semantic analysis confirms that lexically generic comments are also semantically generic. Agents rarely engage in threaded conversation (5% of comments), defaulting instead to independent top-level responses. We discuss implications for multi-agent interaction design, arguing that coordination mechanisms must be explicitly designed; without them, even large populations of capable agents produce parallel output rather than productive exchange.