LLM Reasoning 相关度: 8/10

Exposing the Systematic Vulnerability of Open-Weight Models to Prefill Attacks

Lukas Struppek, Adam Gleave, Kellin Pelrine
arXiv: 2602.14689v1 发布: 2026-02-16 更新: 2026-02-16

AI 摘要

论文揭示了开放权重语言模型中预填充攻击的系统性漏洞,并进行了大规模实证研究。

主要贡献

  • 首次系统性研究预填充攻击对开放权重模型的影响
  • 评估了多种预填充攻击策略的有效性
  • 揭示了当前主流开放权重模型对预填充攻击的普遍脆弱性

方法论

对20多种现有和新的预填充攻击策略,在多个模型家族和先进的开放权重模型上进行了大规模评估。

原文摘要

As the capabilities of large language models continue to advance, so does their potential for misuse. While closed-source models typically rely on external defenses, open-weight models must primarily depend on internal safeguards to mitigate harmful behavior. Prior red-teaming research has largely focused on input-based jailbreaking and parameter-level manipulations. However, open-weight models also natively support prefilling, which allows an attacker to predefine initial response tokens before generation begins. Despite its potential, this attack vector has received little systematic attention. We present the largest empirical study to date of prefill attacks, evaluating over 20 existing and novel strategies across multiple model families and state-of-the-art open-weight models. Our results show that prefill attacks are consistently effective against all major contemporary open-weight models, revealing a critical and previously underexplored vulnerability with significant implications for deployment. While certain large reasoning models exhibit some robustness against generic prefilling, they remain vulnerable to tailored, model-specific strategies. Our findings underscore the urgent need for model developers to prioritize defenses against prefill attacks in open-weight LLMs.

标签

LLM 安全 攻击 预填充 开放权重模型

arXiv 分类

cs.CR cs.AI cs.CL cs.LG