AdapTS: Lightweight Teacher-Student Approach for Multi-Class and Continual Visual Anomaly Detection
AI Summary
AdapTS is a lightweight Teacher-Student framework for multi-class and continual visual anomaly detection.
Key Contributions
- Proposes AdapTS, a unified Teacher-Student framework for multi-class and continual visual anomaly detection.
- Injects lightweight trainable adapters into a single shared frozen backbone, drastically reducing memory overhead.
- Introduces a prototype-based task identification mechanism that dynamically selects the appropriate adapter at inference.
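The prototype-based selection in the last point can be sketched as a nearest-prototype lookup: each task keeps one feature prototype, and at inference the adapter whose prototype is closest to the test image's backbone feature is activated. The function name and cosine-similarity choice below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def select_adapter(feature, prototypes):
    """Pick the adapter whose stored task prototype has the highest cosine
    similarity to the backbone feature of the test image.
    (Illustrative sketch; names are not from the AdapTS codebase.)"""
    feature = feature / np.linalg.norm(feature)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = protos @ feature           # cosine similarity to each task prototype
    return int(np.argmax(sims))       # index of the adapter to activate

# Toy usage: three task prototypes in a 4-D feature space.
prototypes = np.array([[1., 0., 0., 0.],
                       [0., 1., 0., 0.],
                       [0., 0., 1., 0.]])
feature = np.array([0.1, 0.9, 0.05, 0.0])
print(select_adapter(feature, prototypes))  # → 1
```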
Methodology
A single shared frozen backbone serves both pathways, with lightweight trainable adapters injected into the student; training combines a segmentation-guided objective with synthetic anomalies generated from Perlin noise.
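The frozen-backbone-plus-adapter idea above can be sketched minimally: the teacher and student share the same frozen weights, and the student differs only by a small residual bottleneck adapter. All shapes, initializations, and names here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_block(x, W):
    """Stand-in for one frozen backbone layer (W is never updated)."""
    return np.maximum(W @ x, 0.0)

def adapter(x, W_down, W_up):
    """Lightweight residual bottleneck adapter: the only trainable part
    of the student pathway. (Sketch; the real adapter design may differ.)"""
    return x + W_up @ np.maximum(W_down @ x, 0.0)

d, r = 64, 8                                        # feature dim, bottleneck dim
W_backbone = rng.normal(size=(d, d)) / np.sqrt(d)   # frozen, shared by both pathways
W_down = rng.normal(size=(r, d)) * 0.01             # trainable down-projection
W_up = np.zeros((d, r))                             # zero-init: adapter starts as identity

x = rng.normal(size=d)
teacher = frozen_block(x, W_backbone)                          # teacher: backbone only
student = adapter(frozen_block(x, W_backbone), W_down, W_up)   # student: backbone + adapter

# With a zero-initialized up-projection, the student initially matches the teacher.
print(np.allclose(teacher, student))  # → True
```

The adapter adds only d·r + r·d parameters per block (here 1024 vs. 4096 for the backbone layer), which is the source of the memory savings the abstract reports.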
Original Abstract
Visual Anomaly Detection (VAD) is crucial for industrial inspection, yet most existing methods are limited to single-category scenarios, failing to address the multi-class and continual learning demands of real-world environments. While Teacher-Student (TS) architectures are efficient, they remain unexplored for the Continual Setting. To bridge this gap, we propose AdapTS, a unified TS framework designed for multi-class and continual settings, optimized for edge deployment. AdapTS eliminates the need for two different architectures by utilizing a single shared frozen backbone and injecting lightweight trainable adapters into the student pathway. Training is enhanced via a segmentation-guided objective and synthetic Perlin noise, while a prototype-based task identification mechanism dynamically selects adapters at inference with 99% accuracy. Experiments on MVTec AD and VisA demonstrate that AdapTS matches the performance of existing TS methods across multi-class and continual learning scenarios, while drastically reducing memory overhead. Our lightest variant, AdapTS-S, requires only 8 MB of additional memory, 13x less than STFPM (95 MB), 48x less than RD4AD (360 MB), and 149x less than DeSTSeg (1120 MB), making it a highly scalable solution for edge deployment in complex industrial environments.