AI Agents relevance: 8/10

Analysis Of Linguistic Stereotypes in Single and Multi-Agent Generative AI Architectures

Martina Ullasci, Marco Rondina, Riccardo Coppola, Flavio Giobergia, Riccardo Bellanca, Gabriele Mancari Pasi, Luca Prato, Federico Spinoso, Silvia Tagliente
arXiv: 2603.18729v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

The paper analyses stereotype generation by LLMs given inputs written in different dialects and investigates mitigation strategies.

Main Contributions

  • Replicated and analysed dialect-sensitive stereotype generation in LLM outputs
  • Investigated the mitigating effects of prompt engineering and multi-agent architectures
  • Proposed workflow-level control recommendations for high-impact LLM deployments
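The multi-agent mitigation named above follows a generate-critique-revise pattern. The sketch below illustrates that control flow only; every function is a string-manipulation stub standing in for an LLM call, and the blocklist, role names, and wording are hypothetical, not the paper's implementation.

```python
# Minimal sketch of a generate-critique-revise loop. Each role would be an
# LLM call in a real system; here they are deterministic stubs so the
# control flow can run standalone. All names are illustrative.

BIASED_TERMS = {"aggressive": "assertive"}  # toy blocklist for the stub critic

def generate(prompt: str) -> str:
    # Stub generator: returns a canned draft containing a biased adjective.
    return "The speaker sounds aggressive."

def critique(draft: str) -> list[str]:
    # Stub critic: flags any blocklisted term found in the draft.
    return [t for t in BIASED_TERMS if t in draft]

def revise(draft: str, flags: list[str]) -> str:
    # Stub reviser: swaps each flagged term for a neutral alternative.
    for t in flags:
        draft = draft.replace(t, BIASED_TERMS[t])
    return draft

def generate_critique_revise(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        flags = critique(draft)
        if not flags:
            break  # critic raises no objection: accept the draft
        draft = revise(draft, flags)
    return draft

print(generate_critique_revise("Describe the speaker."))
# prints "The speaker sounds assertive."
```

The loop terminates either when the critic stops flagging terms or after a fixed round budget, which is the property that makes the pattern usable as a workflow-level control.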

Methodology

Eight prompt templates are used to probe LLM stereotype outputs under SAE and AAE inputs, and an LLM-as-judge approach evaluates the bias in the results.
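The paired-template evaluation can be sketched as follows: fill one template with an SAE and an AAE variant of the same utterance, then score each completion for stereotype content. The judge below is a keyword-counting stub (the paper uses an LLM as judge), and the template wording, lexicon, and example utterances are all illustrative assumptions.

```python
# Illustrative paired-prompt evaluation for dialect bias. The judge is a
# keyword stub standing in for an LLM-as-judge call; the template and
# lexicon are hypothetical, not taken from the paper.

TEMPLATE = "A person says: '{utterance}'. Suggest three adjectives for them."

STEREOTYPE_LEXICON = {"lazy", "aggressive", "dirty"}  # toy judge lexicon

def judge_bias_score(completion: str) -> int:
    # Stub judge: counts stereotype-lexicon words in the completion.
    words = completion.lower().replace(",", " ").split()
    return sum(w in STEREOTYPE_LEXICON for w in words)

def evaluate_pair(sae_utt: str, aae_utt: str, model) -> int:
    # Positive result: the AAE variant drew more stereotyped output.
    sae_out = model(TEMPLATE.format(utterance=sae_utt))
    aae_out = model(TEMPLATE.format(utterance=aae_utt))
    return judge_bias_score(aae_out) - judge_bias_score(sae_out)

# Stub model for demonstration only: keys its answer off a dialect marker.
def fake_model(prompt: str) -> str:
    return "lazy, loud, aggressive" if "finna" in prompt else "calm, polite, kind"

print(evaluate_pair("I am about to go.", "I'm finna go.", fake_model))
# prints 2
```

Averaging this differential over many utterance pairs and templates gives the kind of per-model SAE-AAE disparity the abstract reports.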

Original Abstract

Many works in the literature show that LLM outputs exhibit discriminatory behaviour, triggering stereotype-based inferences based on the dialect in which the inputs are written. This bias has been shown to be particularly pronounced when the same inputs are provided to LLMs in Standard American English (SAE) and African-American English (AAE). In this paper, we replicate existing analyses of dialect-sensitive stereotype generation in LLM outputs and investigate the effects of mitigation strategies, including prompt engineering (role-based and Chain-Of-Thought prompting) and multi-agent architectures composed of generate-critique-revise models. We define eight prompt templates to analyse different ways in which dialect bias can manifest, such as suggested names, jobs, and adjectives for SAE or AAE speakers. We use an LLM-as-judge approach to evaluate the bias in the results. Our results show that stereotype-bearing differences emerge between SAE- and AAE-related outputs across all template categories, with the strongest effects observed in adjective and job attribution. Baseline disparities vary substantially by model, with the largest SAE-AAE differential observed in Claude Haiku and the smallest in Phi-4 Mini. Chain-Of-Thought prompting proved to be an effective mitigation strategy for Claude Haiku, whereas the use of a multi-agent architecture ensured consistent mitigation across all the models. These findings suggest that for intersectionality-informed software engineering, fairness evaluation should include model-specific validation of mitigation strategies, and workflow-level controls (e.g., agentic architectures involving critique models) in high-impact LLM deployments. The current results are exploratory in nature and limited in scope, but can lead to extensions and replications by increasing the dataset size and applying the procedure to different languages or dialects.

Tags

LLM, bias, stereotypes, dialect, mitigation strategies

arXiv Categories

cs.AI