Multimodal Learning Relevance: 9/10

SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

Quentin Guimard, Federico Bartsch, Simone Caldarella, Rahaf Aljundi, Elisa Ricci, Massimiliano Mancini
arXiv: 2603.19028v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

SEM decomposes CLIP embeddings with a sparse autoencoder to enable post-hoc correction of biases in vision-language models.

Key Contributions

  • Proposes the Sparse Embedding Modulation (SEM) framework
  • Uses sparse representations to enable more precise bias interventions
  • Validates SEM's effectiveness on multiple benchmark datasets

Methodology

A sparse autoencoder decomposes CLIP text embeddings; bias-relevant neurons in the sparse latent space are identified and modulated, while query-relevant neurons are preserved.
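The pipeline above can be sketched as follows. This is a minimal illustration with an untrained toy SAE and random vectors standing in for CLIP embeddings; the weight shapes, the `top_k` selection rule, and all function names are assumptions for illustration, not the paper's actual implementation (SEM's neuron-selection and modulation rules are more involved).

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_sparse = 512, 2048  # CLIP embedding dim, SAE latent dim (illustrative)

# Toy SAE weights; a real SAE would be trained on CLIP text embeddings.
W_enc = rng.standard_normal((d_sparse, d_embed)) / np.sqrt(d_embed)
b_enc = np.zeros(d_sparse)
W_dec = W_enc.T  # tied decoder weights, for simplicity

def sae_encode(x):
    # ReLU encoder yields a sparse, non-negative latent code.
    return np.maximum(W_enc @ x + b_enc, 0.0)

def sae_decode(z):
    return W_dec @ z

def debias(query_emb, bias_embs, top_k=16):
    """Ablate the latent neurons most activated by bias-attribute prompts
    while keeping neurons the query itself relies on (hypothetical rule)."""
    z_query = sae_encode(query_emb)
    z_bias = np.mean([sae_encode(b) for b in bias_embs], axis=0)
    # Score neurons as bias-relevant if they fire for the attribute prompts
    # but contribute little to the query.
    score = z_bias - z_query
    bias_idx = np.argsort(score)[-top_k:]
    z_mod = z_query.copy()
    z_mod[bias_idx] = 0.0  # modulation step: here, simple ablation
    out = sae_decode(z_mod)
    return out / (np.linalg.norm(out) + 1e-8)  # re-normalize, as CLIP does

query = rng.standard_normal(d_embed)
bias_prompts = [rng.standard_normal(d_embed) for _ in range(4)]
debiased = debias(query, bias_prompts)
print(debiased.shape)  # (512,)
```

The key design point this sketch shows is that the intervention happens in the sparse latent space, where features are more disentangled, rather than in the dense CLIP embedding space; the modulated code is then decoded back to a unit-norm embedding usable for retrieval or zero-shot classification.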

Original Abstract

Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.

Tags

Vision-Language Models  Bias Mitigation  Sparse Representations  Post-Hoc Processing

arXiv Categories

cs.CV cs.AI cs.LG