LLM Reasoning relevance: 9/10

Failure of contextual invariance in gender inference with large language models

Sagar Kumar, Ariel Flint, Luca Maria Aiello, Andrea Baronchelli
arXiv: 2603.23485v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

LLMs violate contextual invariance in gender inference, showing systematic output shifts even under near-identical syntactic formulations.

Key Contributions

  • Reveals the failure of contextual invariance in LLM gender-inference tasks.
  • Shows that even minimal, theoretically uninformative context induces large, systematic shifts in LLM outputs.
  • Demonstrates that the influence of cultural gender stereotypes weakens or disappears once context is introduced.

Methodology

A controlled pronoun selection task introduces minimal, theoretically uninformative discourse context and measures the resulting systematic shifts in model outputs; a Contextuality-by-Default analysis then tests whether the context dependence persists after all marginal effects of context are accounted for.
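The probe can be illustrated with a hypothetical sketch. The prompt templates, names, and probability numbers below are invented for illustration and are not taken from the paper; a real study would read pronoun probabilities from an LLM's next-token scores. The idea is to score the same target sentence once in isolation and once preceded by a minimally informative discourse sentence, then measure the shift in the pronoun distribution.

```python
# Hypothetical sketch of a minimal-context pronoun-shift probe.
# Templates and probabilities are invented stand-ins, not the
# paper's actual stimuli or model outputs.

def total_variation(p, q):
    """Total variation distance between two pronoun distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Target sentence whose pronoun the model must choose.
target = "The engineer said that ___ would finish the report."

# Minimal, theoretically uninformative preceding sentence that
# nevertheless contains a pronoun for an unrelated referent.
context = "Someone waved as she walked past the office."

# Toy pronoun distributions (stand-ins for model token probabilities).
p_no_context = {"he": 0.70, "she": 0.25, "they": 0.05}
p_with_context = {"he": 0.45, "she": 0.50, "they": 0.05}

shift = total_variation(p_no_context, p_with_context)
print(f"prompt without context: {target}")
print(f"prompt with context:    {context} {target}")
print(f"output shift (TV distance): {shift:.2f}")
```

A shift of this size under a context that carries no information about the target referent is exactly the kind of contextual-invariance failure the paper reports.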

Original Abstract

Standard evaluation practices assume that large language model (LLM) outputs are stable under contextually equivalent formulations of a task. Here, we test this assumption in the setting of gender inference. Using a controlled pronoun selection task, we introduce minimal, theoretically uninformative discourse context and find that this induces large, systematic shifts in model outputs. Correlations with cultural gender stereotypes, present in decontextualized settings, weaken or disappear once context is introduced, while theoretically irrelevant features, such as the gender of a pronoun for an unrelated referent, become the most informative predictors of model behaviour. A Contextuality-by-Default analysis reveals that, in 19–52% of cases across models, this dependence persists after accounting for all marginal effects of context on individual outputs and cannot be attributed to simple pronoun repetition. These findings show that LLM outputs violate contextual invariance even under near-identical syntactic formulations, with implications for bias benchmarking and deployment in high-stakes settings.
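The Contextuality-by-Default check mentioned in the abstract can be sketched for the simplest case, a cyclic system of rank 2. This is the generic criterion from Dzhafarov and Kujala's framework, not the paper's actual pipeline, and all numbers below are invented: two binary (±1) measurements appear in two contexts, and the system counts as contextual when the change in the product expectations across contexts exceeds the total change in the marginals.

```python
def cyclic2_contextual(E1, E2, a1, a2, b1, b2):
    """Cyclic rank-2 Contextuality-by-Default criterion (sketch).

    E1, E2: expectations of the product A*B in contexts 1 and 2.
    a1, a2: marginal expectations of content A in each context.
    b1, b2: marginal expectations of content B in each context.
    The system is contextual when the context dependence of the
    products cannot be explained by the marginal shifts alone.
    """
    delta = abs(a1 - a2) + abs(b1 - b2)  # direct (marginal) influence
    return abs(E1 - E2) > delta

# Maximal correlation flip with flat marginals: contextual.
print(cyclic2_contextual(E1=1.0, E2=-1.0, a1=0, a2=0, b1=0, b2=0))

# Product shift fully absorbed by marginal shifts: not contextual.
print(cyclic2_contextual(E1=0.8, E2=0.2, a1=0.5, a2=0.0, b1=0.3, b2=0.0))
```

The paper's finding that dependence persists in 19–52% of cases "after accounting for all marginal effects" corresponds to cases where a check of this kind still flags contextuality once the delta term has been subtracted out.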

Tags

LLM Bias, Gender Inference, Contextual Invariance, Evaluation

arXiv Categories

cs.CL cs.AI cs.CY