Robust Safety Monitoring of Language Models via Activation Watermarking
AI Summary
Addressing the fragility of existing safety monitoring for large language models, this work proposes activation watermarking as a defense against adaptive attacks.
Main Contributions
- Shows that existing monitoring methods are vulnerable to adaptive attacks
- Designs a defense mechanism based on activation watermarking
- Validates the effectiveness of activation watermarking against adaptive attacks
Methodology
LLM monitoring is cast as a security game: activation watermarking injects secret, key-dependent uncertainty at inference time, raising the difficulty of crafting attacks that both evade detection and elicit unsafe behavior.
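To make the idea concrete, here is a toy sketch of key-dependent activation watermarking and detection. This is an illustrative simplification, not the paper's actual algorithm: the function names, the fixed `strength` parameter, and the use of a single secret direction derived from a key are all assumptions for exposition.

```python
import hashlib
import numpy as np

def secret_direction(key: str, dim: int) -> np.ndarray:
    # Derive a reproducible unit vector from the provider's secret key.
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def watermark(activations: np.ndarray, key: str, strength: float = 2.0) -> np.ndarray:
    # Perturb activations along the secret direction at inference time.
    return activations + strength * secret_direction(key, activations.shape[-1])

def monitor_score(activations: np.ndarray, key: str) -> float:
    # Project onto the secret direction; watermarked activations score higher.
    return float(activations @ secret_direction(key, activations.shape[-1]))
```

Because the direction depends on the secret key, an adaptive attacker who knows the monitoring algorithm but not the key cannot directly optimize against the detector, which is the source of the uncertainty described above.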
Original Abstract
Large language models (LLMs) can be misused to reveal sensitive information, such as weapon-making instructions or writing malware. LLM providers rely on *monitoring* to detect and flag unsafe behavior during inference. An open security challenge is *adaptive* adversaries who craft attacks that simultaneously (i) evade detection while (ii) eliciting unsafe behavior. Adaptive attackers are a major concern as LLM providers cannot patch their security mechanisms, since they are unaware of how their models are being misused. We cast *robust* LLM monitoring as a security game, where adversaries who know about the monitor try to extract sensitive information, while a provider must accurately detect these adversarial queries at low false positive rates. Our work (i) shows that existing LLM monitors are vulnerable to adaptive attackers and (ii) designs improved defenses through *activation watermarking* by carefully introducing uncertainty for the attacker during inference. We find that *activation watermarking* outperforms guard baselines by up to 52% under adaptive attackers who know the monitoring algorithm but not the secret key.