Multimodal Learning Relevance: 9/10

Explaining CLIP Zero-shot Predictions Through Concepts

Onat Ozdemir, Anders Christensen, Stephan Alaniz, Zeynep Akata, Emre Akbas
arXiv: 2603.28211v1 Published: 2026-03-30 Updated: 2026-03-30

AI Summary

EZPC makes CLIP's zero-shot image recognition interpretable by aligning its predictions with human-understandable concepts.

Key Contributions

  • Proposes EZPC, a model that bridges CLIP and Concept Bottleneck Models.
  • Learns a projection into a concept space via alignment and reconstruction objectives.
  • Validates the model's interpretability and classification accuracy on five benchmark datasets.

Methodology

EZPC projects CLIP's joint image-text embeddings into a concept space learned from language descriptions; alignment and reconstruction objectives ensure that the resulting concept activations preserve CLIP's semantic structure.
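The projection described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the dimensions, the linear projection `W`, the decoder `W_dec`, and the specific squared-error forms of the alignment target (cosine similarity to concept text embeddings) and reconstruction loss are hypothetical choices, not the paper's actual architecture or objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: embedding dim d, number of concepts k, batch size n.
d, k, n = 8, 5, 16

# Stand-ins for CLIP text embeddings of k concept descriptions, L2-normalized.
C = rng.normal(size=(k, d))
C /= np.linalg.norm(C, axis=1, keepdims=True)

# Stand-ins for CLIP image embeddings, L2-normalized as CLIP outputs are.
Z = rng.normal(size=(n, d))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

# Learnable projection into concept space and decoder back to embedding space.
W = rng.normal(size=(d, k)) * 0.1      # embeddings -> concept activations
W_dec = rng.normal(size=(k, d)) * 0.1  # concept activations -> embeddings

def objectives(Z, W, W_dec, C):
    """Alignment + reconstruction losses (assumed squared-error forms)."""
    A = Z @ W                # concept activations, shape (n, k)
    target = Z @ C.T         # cosine similarity of each image to each concept
    align = np.mean((A - target) ** 2)       # activations track concept similarity
    recon = np.mean((A @ W_dec - Z) ** 2)    # activations suffice to rebuild Z
    return align + recon, align, recon

total, align, recon = objectives(Z, W, W_dec, C)
print(total, align, recon)
```

The design intent the sketch captures: the alignment term keeps each activation interpretable as "how much this image matches this concept," while the reconstruction term forces the concept layer to retain enough of CLIP's embedding to preserve its zero-shot behavior.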

Original Abstract

Large-scale vision-language models such as CLIP have achieved remarkable success in zero-shot image recognition, yet their predictions remain largely opaque to human understanding. In contrast, Concept Bottleneck Models provide interpretable intermediate representations by reasoning through human-defined concepts, but they rely on concept supervision and lack the ability to generalize to unseen classes. We introduce EZPC that bridges these two paradigms by explaining CLIP's zero-shot predictions through human-understandable concepts. Our method projects CLIP's joint image-text embeddings into a concept space learned from language descriptions, enabling faithful and transparent explanations without additional supervision. The model learns this projection via a combination of alignment and reconstruction objectives, ensuring that concept activations preserve CLIP's semantic structure while remaining interpretable. Extensive experiments on five benchmark datasets, CIFAR-100, CUB-200-2011, Places365, ImageNet-100, and ImageNet-1k, demonstrate that our approach maintains CLIP's strong zero-shot classification accuracy while providing meaningful concept-level explanations. By grounding open-vocabulary predictions in explicit semantic concepts, our method offers a principled step toward interpretable and trustworthy vision-language models. Code is available at https://github.com/oonat/ezpc.

Tags

CLIP · Zero-shot Learning · Interpretability · Concept Bottleneck Model · Vision-Language Model

arXiv Categories

cs.CV