LLM Memory & RAG relevance: 8/10

SafeNeuron: Neuron-Level Safety Alignment for Large Language Models

Zhaoxin Wang, Jiaming Liang, Fengbin Zhu, Weixiang Zhao, Junfeng Fang, Jiayi Ji, Handing Wang, Tat-Seng Chua
arXiv: 2602.12158v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

SafeNeuron introduces a neuron-level safety alignment framework that strengthens the safety and robustness of LLMs.

Key Contributions

  • Proposes the SafeNeuron framework, improving LLM robustness against neuron-pruning attacks
  • Reduces the risk of open-source models being repurposed as red-team generators
  • Verifies that safety behaviors are governed by stable, shared internal representations

Methodology

SafeNeuron first identifies safety-related neurons, then freezes them during preference optimization, preventing reliance on sparse safety pathways and forcing the model to build redundant safety representations.
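The freeze-then-optimize step above can be sketched as a gradient mask. This is a hypothetical NumPy illustration under stated assumptions, not the paper's implementation: `frozen` stands in for the indices of identified safety-related neurons, and their weight rows receive zero gradient, so any further learning must route through the remaining neurons.

```python
import numpy as np

def masked_update(W, grad, frozen, lr=0.1):
    """One gradient step that leaves the rows in `frozen` untouched.

    W      : (n_neurons, d) weight matrix of one layer
    grad   : gradient of the training loss w.r.t. W
    frozen : indices of the identified safety-related neurons
    """
    mask = np.ones(W.shape[0], dtype=W.dtype)
    mask[frozen] = 0.0                       # zero the update for safety neurons
    return W - lr * mask[:, None] * grad     # other neurons absorb the signal

# Toy check: rows 0 and 2 play the role of safety neurons and must not move.
W = np.arange(12, dtype=float).reshape(4, 3)
grad = np.ones_like(W)
W_new = masked_update(W, grad, frozen=[0, 2])
```

In a real training loop the same effect is usually achieved by zeroing per-parameter gradients (e.g. via gradient hooks) before the optimizer step, so the frozen neurons keep their aligned values while the rest of the network redistributes the safety signal.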

Original Abstract

Large language models (LLMs) and multimodal LLMs are typically safety-aligned before release to prevent harmful content generation. However, recent studies show that safety behaviors are concentrated in a small subset of parameters, making alignment brittle and easily bypassed through neuron-level attacks. Moreover, most existing alignment methods operate at the behavioral level, offering limited control over the model's internal safety mechanisms. In this work, we propose SafeNeuron, a neuron-level safety alignment framework that improves robustness by redistributing safety representations across the network. SafeNeuron first identifies safety-related neurons, then freezes these neurons during preference optimization to prevent reliance on sparse safety pathways and force the model to construct redundant safety representations. Extensive experiments across models and modalities demonstrate that SafeNeuron significantly improves robustness against neuron pruning attacks, reduces the risk of open-source models being repurposed as red-team generators, and preserves general capabilities. Furthermore, our layer-wise analysis reveals that safety behaviors are governed by stable and shared internal representations. Overall, SafeNeuron provides an interpretable and robust perspective for model alignment.

Tags

LLM Safety, Alignment, Neuron-level Control, Robustness

arXiv Category

cs.LG