Multimodal Learning Relevance: 9/10

Steerable Visual Representations

Jona Ruthardt, Manu Gaur, Deva Ramanan, Makarand Tapaswi, Yuki M. Asano
arXiv: 2604.02327v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Proposes steerable visual representations: by early-fusing text information into the visual encoder, the method enables fine-grained control over image features.

Key Contributions

  • Proposes steerable visual representations
  • Proposes a framework for early fusion of text and visual information
  • Proposes benchmarks for measuring representational steerability

Methodology

Text information is injected into intermediate layers of the ViT via lightweight cross-attention, enabling semantic steering of the visual representations.
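The fusion step described above can be sketched as a single cross-attention block in which patch tokens from an intermediate ViT layer act as queries and encoded text tokens act as keys and values. This is a minimal numpy illustration with hypothetical shapes and weight names; the paper's actual architecture (number of heads, placement of layers, normalization) may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(visual_tokens, text_tokens, Wq, Wk, Wv, Wo):
    """Inject text features into visual tokens via cross-attention.

    visual_tokens: (N_patches, d) -- intermediate ViT layer output (queries)
    text_tokens:   (N_text, d)    -- encoded text prompt (keys/values)
    Returns tokens of the same shape; the residual connection preserves the
    underlying visual representation when the text attention is weak.
    """
    d = visual_tokens.shape[-1]
    Q = visual_tokens @ Wq                    # (N_patches, d)
    K = text_tokens @ Wk                      # (N_text, d)
    V = text_tokens @ Wv                      # (N_text, d)
    attn = softmax(Q @ K.T / np.sqrt(d))      # (N_patches, N_text)
    return visual_tokens + (attn @ V) @ Wo    # residual + projected text info

# Toy usage: 4 patch tokens attending over 3 text tokens, dim 8.
rng = np.random.default_rng(0)
d = 8
vis = rng.normal(size=(4, d))
txt = rng.normal(size=(3, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
out = cross_attention_fusion(vis, txt, Wq, Wk, Wv, Wo)
print(out.shape)  # (4, 8): same shape as the input visual tokens
```

Because the block is residual and lightweight, it can be inserted between existing ViT layers without replacing the pretrained visual pathway.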

Original Abstract

Pretrained Vision Transformers (ViTs) such as DINOv2 and MAE provide generic image features that can be applied to a variety of downstream tasks such as retrieval, classification, and segmentation. However, such representations tend to focus on the most salient visual cues in the image, with no way to direct them toward less prominent concepts of interest. In contrast, Multimodal LLMs can be guided with textual prompts, but the resulting representations tend to be language-centric and lose their effectiveness for generic visual tasks. To address this, we introduce Steerable Visual Representations, a new class of visual representations, whose global and local features can be steered with natural language. While most vision-language models (e.g., CLIP) fuse text with visual features after encoding (late fusion), we inject text directly into the layers of the visual encoder (early fusion) via lightweight cross-attention. We introduce benchmarks for measuring representational steerability, and demonstrate that our steerable visual features can focus on any desired objects in an image while preserving the underlying representation quality. Our method also matches or outperforms dedicated approaches on anomaly detection and personalized object discrimination, exhibiting zero-shot generalization to out-of-distribution tasks.

Tags

Steerable Visual Representations  Vision-Language Models  Early Fusion

arXiv Categories

cs.CV cs.AI