Concept-Guided Fine-Tuning: Steering ViTs away from Spurious Correlations to Improve Robustness
AI Summary
A concept-guided fine-tuning method is proposed that improves the robustness of ViT models by aligning the model's internal relevance with concept masks.
Key Contributions
- A concept-guided fine-tuning framework that enhances ViT robustness
- Automatic generation of concept masks using an LLM and a VLM, with no manual annotation required
- Evidence that concept-guided masks are more effective than conventional segmentation masks
Methodology
An LLM and a VLM are used to generate concept masks. The ViT is then fine-tuned by aligning the model's internal relevance with these masks while suppressing attention to background regions.
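The alignment objective described above can be sketched as a simple loss that rewards relevance mass inside concept regions and penalizes it on the background. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, the normalization step, and the weighting factor `lam` are all assumptions:

```python
import numpy as np

def concept_alignment_loss(relevance, concept_masks, bg_mask, lam=1.0):
    """Hypothetical sketch of a concept-guided fine-tuning objective:
    encourage relevance inside concept regions and suppress it on background.

    relevance:     (H, W) non-negative relevance map from the model
    concept_masks: list of (H, W) binary masks, one per grounded concept
    bg_mask:       (H, W) binary mask of spurious background regions
    lam:           weight of the background-suppression term (assumed)
    """
    # Normalize relevance to a spatial distribution for scale invariance
    rel = relevance / (relevance.sum() + 1e-8)
    # Total relevance mass falling inside any concept region
    concept_mass = sum((rel * m).sum() for m in concept_masks)
    # Relevance mass leaking onto the background
    bg_mass = (rel * bg_mask).sum()
    # Lower loss = more mass on concepts, less on background
    return -concept_mass + lam * bg_mass
```

In an actual training loop, a term of this form would be added to the task loss, with the relevance maps computed differentiably from the ViT's attention or attribution scores.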
Original Abstract
Vision Transformers (ViTs) often degrade under distribution shifts because they rely on spurious correlations, such as background cues, rather than semantically meaningful features. Existing regularization methods typically rely on simple foreground-background masks, which fail to capture the fine-grained semantic concepts that define an object (e.g., "long beak" and "wings" for a "bird"). As a result, these methods provide limited robustness to distribution shifts. To address this limitation, we introduce a novel fine-tuning framework that steers model reasoning toward concept-level semantics. Our approach optimizes the model's internal relevance maps to align with spatially grounded concept masks. These masks are generated automatically, without manual annotation: class-relevant concepts are first proposed using an LLM-based, label-free method, and then segmented using a VLM. The fine-tuning objective aligns relevance with these concept regions while simultaneously suppressing focus on spurious background areas. Notably, this process requires only a minimal set of images and uses half of the dataset classes. Extensive experiments on five out-of-distribution benchmarks demonstrate that our method improves robustness across multiple ViT-based models. Furthermore, we show that the resulting relevance maps exhibit stronger alignment with semantic object parts, offering a scalable path toward more robust and interpretable vision models. Finally, we confirm that concept-guided masks provide more effective supervision for model robustness than conventional segmentation maps, supporting our central hypothesis.