Do VLMs Need Vision Transformers? Evaluating State Space Models as Vision Encoders
AI Summary
This work studies state space models (SSMs) as vision encoders in vision-language models (VLMs) and finds them to be competitive with transformer-based encoders.
Main Contributions
- Evaluates SSMs as vision backbones for VLMs in a controlled setting
- Proposes stabilization strategies that improve the robustness of vision backbones
- Finds that SSM backbones are a strong alternative to transformers in VLMs
Methodology
SSM and ViT backbones are initialized with matched ImageNet-1K pretraining and evaluated on VQA and grounding/localization tasks; both families are further adapted with detection/segmentation (dense-task) tuning.
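The VLM setup the paper evaluates keeps the vision backbone frozen and trains only a lightweight connector that maps image features into the language model's embedding space. A minimal sketch of that wiring, with purely illustrative dimensions and stand-in functions (none of these names or sizes come from the paper):

```python
# Sketch of a frozen vision backbone + lightweight connector, as in the
# VLM setup described above. All names and sizes are illustrative
# assumptions, not the paper's implementation.
import random

random.seed(0)

VISION_DIM = 8   # assumed feature width of the vision backbone
LLM_DIM = 16     # assumed embedding width of the language model

def frozen_backbone(image_patches):
    """Stand-in for a frozen SSM/ViT encoder: one feature vector per patch."""
    return [[random.random() for _ in range(VISION_DIM)] for _ in image_patches]

def make_connector(in_dim, out_dim):
    """A lightweight linear connector; the only trained vision-side module."""
    w = [[random.gauss(0, 0.02) for _ in range(out_dim)] for _ in range(in_dim)]
    def project(features):
        return [[sum(f[i] * w[i][j] for i in range(in_dim))
                 for j in range(out_dim)] for f in features]
    return project

patches = list(range(4))                  # pretend image with 4 patches
connector = make_connector(VISION_DIM, LLM_DIM)
visual_tokens = connector(frozen_backbone(patches))
# visual_tokens would be prepended to the LLM's text-token embeddings
print(len(visual_tokens), len(visual_tokens[0]))  # → 4 16
```

Because the backbone stays frozen, swapping a ViT for an SSM encoder only changes `frozen_backbone` and `VISION_DIM`; the connector and LLM are unchanged, which is what makes the paper's controlled comparison possible.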
Original Abstract
Large vision-language models (VLMs) often use a frozen vision backbone, whose image features are mapped into a large language model through a lightweight connector. While transformer-based encoders are the standard visual backbone, we ask whether state space model (SSM) vision backbones can be a strong alternative. We systematically evaluate SSM vision backbones for VLMs in a controlled setting. Under matched ImageNet-1K initialization, the SSM backbone achieves the strongest overall performance across both VQA and grounding/localization. We further adapt both SSM and ViT-family backbones with detection or segmentation training and find that dense-task tuning generally improves performance across families; after this adaptation, the SSM backbone remains competitive while operating at a substantially smaller model scale. We further observe that (i) higher ImageNet accuracy or larger backbones do not reliably translate into better VLM performance, and (ii) some visual backbones are unstable in localization. Based on these findings, we propose stabilization strategies that improve robustness for both backbone families and highlight SSM backbones as a strong alternative to transformer-based vision encoders in VLMs.