AI Agents relevance: 7/10

VibeGuard: A Security Gate Framework for AI-Generated Code

Ying Xie
arXiv: 2604.01052v1 Published: 2026-04-01 Updated: 2026-04-01

AI Summary

VibeGuard is a pre-publish security gate for AI-generated code, designed to close blind spots in existing tooling and improve code security.

Key Contributions

  • Proposes VibeGuard, a security-checking tool targeting blind spots specific to AI-generated code
  • Detects five blind spots: artifact hygiene, packaging-configuration drift, source-map exposure, hardcoded secrets, and supply-chain risk
  • Experimental results show VibeGuard achieves high recall and precision
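Two of the listed blind spots, source-map exposure and hardcoded secrets, lend themselves to simple file-level scanning. The sketch below is an illustrative assumption, not VibeGuard's actual rule set: the file patterns, the secret regex, and the `scan_package` helper are all hypothetical.

```python
import re

# Hypothetical checks for two blind spots; NOT VibeGuard's real rules.
SOURCE_MAP_RE = re.compile(r"\.map$")  # .map files should not ship in a package
SECRET_RE = re.compile(
    r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
)

def scan_package(files):
    """files: dict mapping path -> text content slated for publication.
    Returns a list of (finding_kind, path) tuples."""
    findings = []
    for path, text in files.items():
        if SOURCE_MAP_RE.search(path):
            findings.append(("source-map-exposure", path))
        if SECRET_RE.search(text):
            findings.append(("hardcoded-secret", path))
    return findings

# Example: a package that ships a source map and a hardcoded key.
pkg = {
    "dist/cli.js": "console.log('hi')",
    "dist/cli.js.map": "{}",
    "config.js": "const API_KEY = 'abcd1234abcd1234abcd'",
}
print(scan_package(pkg))  # flags the .map file and the hardcoded key
```

A real gate would also inspect the packaging manifest (e.g. npm's `files` field) rather than only the file list, which is what the packaging-configuration-drift check in the paper appears to address.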

Methodology

VibeGuard enforces a defined security policy as a pre-publish check, identifying and blocking potential vulnerabilities before release; its performance is then evaluated experimentally.
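The abstract mentions gate decisions across three policy levels. A minimal sketch of such a policy-driven pass/fail decision is shown below; the policy names, severity assignments, and thresholds are assumptions for illustration, not the paper's actual definitions.

```python
# Hypothetical severity per finding kind (assumed, not from the paper).
SEVERITY = {
    "hardcoded-secret": "high",
    "source-map-exposure": "high",
    "packaging-drift": "medium",
    "artifact-hygiene": "low",
}

# Hypothetical mapping from policy level to severities that block publishing.
POLICY_BLOCKS = {
    "strict":   {"low", "medium", "high"},
    "standard": {"medium", "high"},
    "lenient":  {"high"},
}

def gate_decision(findings, policy="standard"):
    """findings: list of (finding_kind, path). Returns 'pass' or 'fail'."""
    blocked = POLICY_BLOCKS[policy]
    for kind, _path in findings:
        if SEVERITY.get(kind, "low") in blocked:
            return "fail"
    return "pass"

# Even under the most permissive policy, a shipped source map blocks publish.
print(gate_decision([("source-map-exposure", "dist/cli.js.map")], "lenient"))
```

Separating the scanner (which finds issues) from the gate (which decides) keeps policy tunable per team without changing detection logic.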

Original Abstract

"Vibe coding," in which developers delegate code generation to AI assistants and accept the output with little manual review, has gained rapid adoption in production settings. On March 31, 2026, Anthropic's Claude Code CLI shipped a 59.8 MB source map file in its npm package, exposing roughly 512,000 lines of proprietary TypeScript. The tool had itself been largely vibe-coded, and the leak traced to a misconfigured packaging rule rather than a logic bug. Existing static-analysis and secret-scanning tools did not cover this failure mode, pointing to a gap between the vulnerabilities AI tends to introduce and the vulnerabilities current tooling is built to find. We present VibeGuard, a pre-publish security gate that targets five such blind spots: artifact hygiene, packaging-configuration drift, source-map exposure, hardcoded secrets, and supply-chain risk. In controlled experiments on eight synthetic projects (seven vulnerable, one clean control), VibeGuard achieved 100% recall, 89.47% precision (F1 = 94.44%), and correct pass/fail gate decisions on all eight projects across three policy levels. We discuss how these results inform a defense-in-depth workflow for teams that rely on AI code generation.

Tags

AI-generated code security, code vulnerability detection, software supply-chain security

arXiv Categories

cs.CR cs.AI