BackdoorIDS: Zero-shot Backdoor Detection for Pretrained Vision Encoder
AI Summary
BackdoorIDS is a zero-shot backdoor detection method for vision encoders, built on the phenomena of attention hijacking and restoration.
Main Contributions
- Proposes BackdoorIDS, a zero-shot backdoor detection method
- Detects backdoors from attention changes during progressive input masking
- Validates the method's effectiveness across diverse datasets and models
Methodology
The input image is progressively masked while the resulting changes in its image embedding are observed. If the embedding sequence forms more than one cluster, the sample is flagged as backdoored.
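The detection loop above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `encode` stands in for the pretrained vision encoder, the masking here zeroes random pixels (the paper's exact masking scheme may differ), and the mask ratios, DBSCAN `eps`, and `min_samples` values are placeholder choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def detect_backdoor(image, encode, mask_ratios=np.linspace(0.0, 0.9, 10),
                    eps=0.5, min_samples=2, seed=0):
    """Flag an input as backdoored if its embedding trajectory under
    progressive masking splits into more than one DBSCAN cluster.

    image:  2D numpy array (a toy stand-in for an input image)
    encode: callable mapping an image to a 1D embedding vector
            (stand-in for the pretrained vision encoder)
    """
    rng = np.random.default_rng(seed)
    embeddings = []
    for r in mask_ratios:
        masked = image.copy()
        # Zero out a fraction r of pixels (simplified stand-in for the
        # paper's progressive input masking).
        masked[rng.random(image.shape) < r] = 0.0
        embeddings.append(encode(masked))
    X = np.stack(embeddings)
    # L2-normalize so the eps threshold is scale-free.
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    n_clusters = len(set(labels) - {-1})  # -1 marks DBSCAN noise points
    # More than one cluster => abrupt embedding shift => backdoored.
    return n_clusters > 1
```

The intuition matches the abstract: a clean image's embeddings drift smoothly along the masking trajectory and chain into a single dense cluster, while a backdoored image's embeddings jump once the trigger is masked out, producing a second cluster.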
Original Abstract
Self-supervised and multimodal vision encoders learn strong visual representations that are widely adopted in downstream vision tasks and large vision-language models (LVLMs). However, downstream users often rely on third-party pretrained encoders with uncertain provenance, exposing them to backdoor attacks. In this work, we propose BackdoorIDS, a simple yet effective zero-shot, inference-time backdoor-sample detection method for pretrained vision encoders. BackdoorIDS is motivated by two observations: Attention Hijacking and Restoration. Under progressive input masking, a backdoored image initially concentrates attention on malicious trigger features. Once the masking ratio exceeds the trigger's robustness threshold, the trigger is deactivated, and attention rapidly shifts to benign content. This transition induces a pronounced change in the image embedding, whereas embeddings of clean images evolve more smoothly across masking progress. BackdoorIDS operationalizes this signal by extracting an embedding sequence along the masking trajectory and applying density-based clustering such as DBSCAN. An input is flagged as backdoored if its embedding sequence forms more than one cluster. Extensive experiments show that BackdoorIDS consistently outperforms existing defenses across diverse attack types, datasets, and model families. Notably, it is a plug-and-play approach that requires no retraining and operates fully zero-shot at inference time, making it compatible with a wide range of encoder architectures, including CNNs, ViTs, CLIP, and LLaVA-1.5.