AI Agents 相关度: 8/10

Measuring and Exploiting Confirmation Bias in LLM-Assisted Security Code Review

Dimitris Mitropoulos, Nikolaos Alexopoulos, Georgios Alexopoulos, Diomidis Spinellis
arXiv: 2603.18740v1 发布: 2026-03-19 更新: 2026-03-19

AI 摘要

研究确认偏差对LLM代码审查的影响,揭示其安全漏洞并提出缓解策略。

主要贡献

  • 量化了确认偏差对LLM漏洞检测的影响
  • 揭示了对抗性框架可以利用确认偏差攻击LLM代码审查
  • 提出了通过元数据编辑和显式指令来减轻偏差的方法

方法论

通过控制实验和模拟对抗性pull requests,评估LLM在不同提示框架下的漏洞检测性能。

原文摘要

Security code reviews increasingly rely on systems integrating Large Language Models (LLMs), ranging from interactive assistants to autonomous agents in CI/CD pipelines. We study whether confirmation bias (i.e., the tendency to favor interpretations that align with prior expectations) affects LLM-based vulnerability detection, and whether this failure mode can be exploited in software supply-chain attacks. We conduct two complementary studies. Study 1 quantifies confirmation bias through controlled experiments on 250 CVE vulnerability/patch pairs evaluated across four state-of-the-art models under five framing conditions for the review prompt. Framing a change as bug-free reduces vulnerability detection rates by 16-93%, with strongly asymmetric effects: false negatives increase sharply while false positive rates change little. Bias effects vary by vulnerability type, with injection flaws being more susceptible to them than memory corruption bugs. Study 2 evaluates exploitability in practice mimicking adversarial pull requests that reintroduce known vulnerabilities while framed as security improvements or urgent functionality fixes via their pull request metadata. Adversarial framing succeeds in 35% of cases against GitHub Copilot (interactive assistant) under one-shot attacks and in 88% of cases against Claude Code (autonomous agent) in real project configurations where adversaries can iteratively refine their framing to increase attack success. Debiasing via metadata redaction and explicit instructions restores detection in all interactive cases and 94% of autonomous cases. Our results show that confirmation bias poses a weakness in LLM-based code review, with implications on how AI-assisted development tools are deployed.

标签

LLM 代码审查 安全漏洞 确认偏差 软件供应链安全

arXiv 分类

cs.SE cs.AI cs.CR