StructXLIP: Enhancing Vision-language Models with Multimodal Structural Cues
AI Summary
StructXLIP extracts structural information from images to strengthen the cross-modal alignment of vision-language models and improve retrieval performance.
Main Contributions
- Proposes the StructXLIP framework, which exploits image edge information to enhance VLMs
- Introduces structure-centric losses that optimize the alignment between image and text structural representations
- Demonstrates experimentally the effectiveness of StructXLIP on cross-modal retrieval tasks
Methodology
Edge features are extracted from images, captions are filtered to emphasize structural information, and structure-centric loss functions align the structured visual and textual representations, improving cross-modal retrieval performance.
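The snippet below is a minimal sketch of what such a structure-centric fine-tuning step could look like, not the authors' released code: it assumes a CLIP-style model exposing `encode_image`/`encode_text` (as in open_clip), and the Canny thresholds, the `info_nce` helper, and the loss weights are illustrative assumptions; the local region-to-chunk term is omitted for brevity.

```python
# Hedged sketch of a structure-centric alignment step; names and weights are illustrative.
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def canny_edge_map(image_bgr: np.ndarray) -> np.ndarray:
    """Extract a 3-channel Canny edge map, used as a proxy for the image's visual structure."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # thresholds are an assumption
    return cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)           # replicate to 3 channels for the encoder

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of embeddings (matched by index)."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def structure_centric_step(model, images, edge_maps, captions_tok, struct_captions_tok,
                           weights=(1.0, 1.0, 1.0)):
    """Standard image-text alignment plus two of the structure-centric terms:
    edge map <-> structural text, and edge map <-> color image (anti-drift)."""
    img_emb = model.encode_image(images)                       # color images
    edge_emb = model.encode_image(edge_maps)                   # edge maps as structure proxies
    txt_emb = model.encode_text(captions_tok)                  # full captions
    struct_txt_emb = model.encode_text(struct_captions_tok)    # structure-filtered captions

    l_clip = info_nce(img_emb, txt_emb)                        # standard CLIP-style term
    l_struct = info_nce(edge_emb, struct_txt_emb)              # (i) edges vs. structural text
    l_drift = info_nce(edge_emb, img_emb)                      # (iii) edges vs. color images
    # (ii) local edge regions vs. textual chunks needs region/chunk features; omitted here.
    return weights[0] * l_clip + weights[1] * l_struct + weights[2] * l_drift
```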
Original Abstract
Edge-based representations are fundamental cues for visual understanding, a principle rooted in early vision research and still central today. We extend this principle to vision-language alignment, showing that isolating and aligning structural cues across modalities can greatly benefit fine-tuning on long, detail-rich captions, with a specific focus on improving cross-modal retrieval. We introduce StructXLIP, a fine-tuning alignment paradigm that extracts edge maps (e.g., Canny), treating them as proxies for the visual structure of an image, and filters the corresponding captions to emphasize structural cues, making them "structure-centric". Fine-tuning augments the standard alignment loss with three structure-centric losses: (i) aligning edge maps with structural text, (ii) matching local edge regions to textual chunks, and (iii) connecting edge maps to color images to prevent representation drift. From a theoretical standpoint, while standard CLIP maximizes the mutual information between visual and textual embeddings, StructXLIP additionally maximizes the mutual information between multimodal structural representations. This auxiliary optimization is intrinsically harder, guiding the model toward more robust and semantically stable minima, enhancing vision-language alignment. Beyond outperforming current competitors on cross-modal retrieval in both general and specialized domains, our method serves as a general boosting recipe that can be integrated into future approaches in a plug-and-play manner. Code and pretrained models are publicly available at: https://github.com/intelligolabs/StructXLIP.
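Read as an objective, the abstract describes augmenting the standard contrastive loss with three structure-centric terms; one plausible formalization, in our own notation rather than the paper's, is:

```latex
% Illustrative notation (not taken from the paper): v, t are color-image and caption
% embeddings; e, s are edge-map and structure-filtered-caption embeddings.
\mathcal{L}_{\text{StructXLIP}}
  = \mathcal{L}_{\text{CLIP}}(v, t)
  + \lambda_1\,\mathcal{L}_{\text{edge-text}}(e, s)
  + \lambda_2\,\mathcal{L}_{\text{region-chunk}}(e_{\mathrm{loc}}, s_{\mathrm{chunk}})
  + \lambda_3\,\mathcal{L}_{\text{edge-image}}(e, v)
```

Since each contrastive term is an InfoNCE-style lower bound on a mutual information quantity, minimizing the additional terms tightens the bound on the mutual information between the multimodal structural representations, consistent with the theoretical argument in the abstract.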