LLM Reasoning Relevance: 9/10

ABCD: All Biases Come Disguised

Mateusz Nowak, Xavier Cadet, Peter Chin
arXiv: 2602.17445v1 Published: 2026-02-19 Updated: 2026-02-19

AI Summary

This paper proposes an evaluation method that reduces LLM bias on multiple-choice question benchmarks, improving the models' robustness to answer permutations.

Key Contributions

  • Identifies biases in how LLMs answer multiple-choice questions, including label, answer-position, and few-shot-prompt biases.
  • Proposes a simple bias-reduced evaluation protocol that replaces option labels with uniform, unordered labels.
  • Shows that the protocol improves robustness to answer permutations with only a minimal drop in performance.
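The protocol's core idea can be sketched as building a prompt in which neither the option's position nor its label carries information. A minimal sketch, assuming a hypothetical `bias_reduced_prompt` helper (not the paper's actual code): options are shuffled and all marked with the same unordered bullet, and the model is asked to write out the full answer text rather than a letter.

```python
import random

def bias_reduced_prompt(question, options, seed=0):
    # Shuffle the options and mark each with the same unordered bullet
    # instead of "A./B./C./D.", so neither position nor label can be
    # used as a shortcut. Hypothetical helper for illustration only.
    opts = list(options)
    random.Random(seed).shuffle(opts)
    lines = [question] + [f"- {o}" for o in opts]
    lines.append("Answer by writing out the full text of the correct option.")
    return "\n".join(lines)

print(bias_reduced_prompt(
    "What does the speed of light describe?",
    ["how fast light travels in vacuum", "the boiling point of water"]))
```

Because the model must reproduce the whole answer, its output is later matched to the options by semantic similarity rather than by parsing a label.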

Methodology

The authors construct a synthetic NonsenseQA benchmark to observe the biases of different LLMs, then validate the effectiveness of the debiasing protocol by scoring each model's free-text answer against the options with a sentence similarity model.
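The similarity-based scoring step can be illustrated with a toy sketch: embed the model's free-text answer and every option, then pick the option with the highest cosine similarity. The paper uses a sentence similarity model; the bag-of-words embedding below is a deliberately simple stand-in so the example runs without external dependencies.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding; a stand-in for the sentence
    # similarity model used in the paper.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_answer(free_text_answer, options):
    # Score the free-text answer against each option and return the
    # index of the closest one, instead of parsing an "A"/"B" label.
    ans = embed(free_text_answer)
    scores = [cosine(ans, embed(o)) for o in options]
    return max(range(len(options)), key=scores.__getitem__)

options = ["the boiling point of water",
           "the speed of light",
           "the capital of France"]
print(match_answer("It is the speed at which light travels", options))  # → 1
```

In the paper's ablations, the choice of embedding model and similarity function is varied; this matching step is what removes the dependence on option labels entirely.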

Original Abstract

Multiple-choice question (MCQ) benchmarks have been a standard evaluation practice for measuring LLMs' ability to reason and answer knowledge-based questions. Through a synthetic NonsenseQA benchmark, we observe that different LLMs exhibit varying degrees of label-position-few-shot-prompt bias, where the model either uses the answer position, the label in front of the answer, the distributions of correct answers present in the few-shot prompt, or a combination of all to answer each MCQ question. We propose a simple bias-reduced evaluation protocol that replaces the labels of each question with uniform, unordered labels and prompts the LLM to use the whole answer presented. With a simple sentence similarity model, we demonstrate improved robustness and lower standard deviation between different permutations of answers with a minimal drop in LLM's performance, exposing the LLM's capabilities under reduced evaluation artifacts, without any help from the prompt examples or the option labels. Across multiple benchmarks and models, this protocol substantially improves the robustness to answer permutations, reducing mean accuracy variance $3\times$ with only a minimal decrease in the mean model's performance. Through ablation studies on various embedding models and similarity functions, we show that the method is more robust than the standard ones.

Tags

LLM Bias Evaluation Robustness

arXiv Categories

cs.CL cs.LG