SpectralGCD: Spectral Concept Selection and Cross-modal Representation Learning for Generalized Category Discovery
AI Summary
SpectralGCD leverages CLIP cross-modal similarities to achieve efficient Generalized Category Discovery through spectral filtering and knowledge distillation.
Main Contributions
- Propose the SpectralGCD framework, which leverages cross-modal image-concept similarities
- Introduce Spectral Filtering to automatically retain only relevant concepts
- Preserve the semantic sufficiency and alignment of the student model via forward and reverse knowledge distillation
Methodology
Build CLIP-based cross-modal representations, select concepts via Spectral Filtering, and train a student model through knowledge distillation.
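One plausible reading of the Spectral Filtering step, sketched in NumPy under assumed details: the dictionary size, number of leading eigenvectors, retained-concept count, and the eigenvector-energy scoring rule are all illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical shapes: N images, K concepts in the task-agnostic dictionary.
rng = np.random.default_rng(0)
N, K = 128, 512

# Stand-in for teacher image-concept similarities (e.g. CLIP logits),
# softmaxed per image so each row is a mixture over concepts.
logits = rng.normal(size=(N, K))
sims = np.exp(logits - logits.max(axis=1, keepdims=True))
sims /= sims.sum(axis=1, keepdims=True)

# Cross-modal covariance matrix over the softmaxed similarities.
centered = sims - sims.mean(axis=0, keepdims=True)
cov = centered.T @ centered / (N - 1)          # shape (K, K)

# Spectral filtering: score each concept by its energy in the leading
# eigenvectors, then retain the highest-scoring concepts (rule assumed).
eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
top = eigvecs[:, -16:]                         # 16 leading eigenvectors (assumed)
scores = (top ** 2 * eigvals[-16:]).sum(axis=1)
keep = np.argsort(scores)[-64:]                # retain 64 concepts (assumed)
filtered_sims = sims[:, keep]                  # reduced cross-modal representation
```

The key idea this sketch captures is that concept selection is driven by the second-order statistics of the teacher's similarity distribution rather than by per-image heuristics.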
Original Abstract
Generalized Category Discovery (GCD) aims to identify novel categories in unlabeled data while leveraging a small labeled subset of known classes. Training a parametric classifier solely on image features often leads to overfitting to old classes, and recent multimodal approaches improve performance by incorporating textual information. However, they treat modalities independently and incur high computational cost. We propose SpectralGCD, an efficient and effective multimodal approach to GCD that uses CLIP cross-modal image-concept similarities as a unified cross-modal representation. Each image is expressed as a mixture over semantic concepts from a large task-agnostic dictionary, which anchors learning to explicit semantics and reduces reliance on spurious visual cues. To maintain the semantic quality of representations learned by an efficient student, we introduce Spectral Filtering which exploits a cross-modal covariance matrix over the softmaxed similarities measured by a strong teacher model to automatically retain only relevant concepts from the dictionary. Forward and reverse knowledge distillation from the same teacher ensures that the cross-modal representations of the student remain both semantically sufficient and well-aligned. Across six benchmarks, SpectralGCD delivers accuracy comparable to or significantly superior to state-of-the-art methods at a fraction of the computational cost. The code is publicly available at: https://github.com/miccunifi/SpectralGCD.
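The forward and reverse distillation described in the abstract can be sketched as a symmetric pair of KL terms between student and teacher concept distributions. This is a minimal NumPy sketch; the function name, temperature `tau`, and weight `alpha` are hypothetical, and the paper's actual loss formulation may differ.

```python
import numpy as np

def softmax(x, tau=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / tau)
    return z / z.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """Mean KL divergence KL(p || q) over a batch of distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

def forward_reverse_kd(student_logits, teacher_logits, tau=1.0, alpha=0.5):
    """Combine forward KL(teacher || student) and reverse KL(student || teacher).

    Forward KL pushes the student to cover the teacher's concept mass
    (semantic sufficiency); reverse KL discourages the student from placing
    mass where the teacher does not (alignment). Weighting is assumed.
    """
    s = softmax(student_logits, tau)
    t = softmax(teacher_logits, tau)
    return alpha * kl(t, s) + (1 - alpha) * kl(s, t)
```

A quick usage check: the loss is zero when student and teacher logits agree and positive otherwise.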