Multimodal Learning (Relevance: 9/10)

Chatting with Images for Introspective Visual Thinking

Junfei Wu, Jian Guan, Qiang Liu, Shu Wu, Liang Wang, Wei Wu, Tieniu Tan
arXiv: 2602.11073v1 Published: 2026-02-11 Updated: 2026-02-11

AI Summary

ViLaVT strengthens LVLMs' multi-image and video spatial reasoning through language-guided feature modulation.

Key Contributions

  • Proposes "chatting with images", a new framework that reframes visual manipulation as language-guided visual feature modulation
  • Designs ViLaVT, an LVLM equipped with a dynamic vision encoder for interactive visual reasoning
  • Trains with a two-stage curriculum combining supervised fine-tuning and reinforcement learning to promote effective reasoning behaviors (a toy sketch of this two-stage structure follows this list)
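The digest gives no training details, so the following is only a rough illustration of the SFT-then-RL curriculum structure: a tabular softmax policy over two hypothetical "reasoning behaviors" is first pushed toward a demonstrated action (supervised stage), then refined with REINFORCE updates (RL stage). The action names, reward function, and learning rates are invented for illustration and are not ViLaVT's actual recipe.

```python
import math
import random

# Two hypothetical reasoning behaviors the policy can choose between.
ACTIONS = ["re_encode_regions", "answer_directly"]

def softmax(logits):
    m = max(logits.values())
    exps = {a: math.exp(v - m) for a, v in logits.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def sft_step(logits, demo_action, lr=0.5):
    # Stage 1 (supervised fine-tuning): cross-entropy gradient step
    # toward the demonstrated action.
    probs = softmax(logits)
    for a in ACTIONS:
        grad = probs[a] - (1.0 if a == demo_action else 0.0)
        logits[a] -= lr * grad

def rl_step(logits, reward_fn, lr=0.5):
    # Stage 2 (reinforcement learning): REINFORCE update -- sample an
    # action, then scale the log-probability gradient by its reward.
    probs = softmax(logits)
    a = random.choices(ACTIONS, weights=[probs[x] for x in ACTIONS])[0]
    r = reward_fn(a)
    for b in ACTIONS:
        grad = (1.0 if b == a else 0.0) - probs[b]
        logits[b] += lr * r * grad

logits = {a: 0.0 for a in ACTIONS}
for _ in range(20):
    sft_step(logits, demo_action="re_encode_regions")          # stage 1
for _ in range(200):
    rl_step(logits, lambda a: 1.0 if a == "re_encode_regions" else -0.2)  # stage 2
print(softmax(logits))  # probability mass concentrates on the rewarded behavior
```

The point of the curriculum is visible even in this toy: the supervised stage gives the policy a reasonable starting behavior, and the RL stage then sharpens it against a reward signal rather than learning from scratch.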

Methodology

Guided by language prompts, the model dynamically performs joint re-encoding over multiple image regions, tightly coupling linguistic reasoning with visual state updates. Experiments are conducted with the ViLaVT model.
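The digest does not specify how the modulation is implemented; the sketch below shows one plausible reading, assuming a FiLM-style scheme in which a pooled prompt embedding produces per-channel scale and shift applied to region tokens, after which a small transformer jointly re-encodes them. The module name (LanguageGuidedModulator) and all dimensions are hypothetical, not the actual ViLaVT architecture.

```python
import torch
import torch.nn as nn

class LanguageGuidedModulator(nn.Module):
    """Sketch of language-guided feature modulation: the prompt
    embedding yields per-channel (gamma, beta) applied to region
    features, which are then jointly re-encoded so that distant
    regions (possibly from different images) can attend to each
    other under the prompt's guidance."""

    def __init__(self, vis_dim: int = 1024, txt_dim: int = 768, n_layers: int = 2):
        super().__init__()
        # Map the prompt embedding to modulation parameters (gamma, beta).
        self.to_film = nn.Linear(txt_dim, 2 * vis_dim)
        # Joint re-encoder over the concatenated region tokens.
        layer = nn.TransformerEncoderLayer(d_model=vis_dim, nhead=8, batch_first=True)
        self.re_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, region_feats: torch.Tensor, prompt_emb: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, n_tokens, vis_dim) -- tokens gathered from
        #               one or more image regions across one or more images.
        # prompt_emb:   (batch, txt_dim) -- pooled embedding of the prompt.
        gamma, beta = self.to_film(prompt_emb).chunk(2, dim=-1)
        modulated = gamma.unsqueeze(1) * region_feats + beta.unsqueeze(1)
        # Joint re-encoding updates the "visual state" in place of
        # external tool- or code-based image manipulation.
        return self.re_encoder(modulated)

# Usage: re-encode 64 region tokens drawn from two images under one prompt.
mod = LanguageGuidedModulator()
feats = torch.randn(1, 64, 1024)
prompt = torch.randn(1, 768)
updated = mod(feats, prompt)  # (1, 64, 1024)
```

Re-encoding the regions jointly, rather than cropping or editing images with external tools, is what lets the visual update stay grounded in the linguistic semantics of the prompt.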

Original Abstract

Current large vision-language models (LVLMs) typically rely on text-only reasoning based on a single-pass visual encoding, which often leads to loss of fine-grained visual information. Recently, the proposal of "thinking with images" attempts to alleviate this limitation by manipulating images via external tools or code; however, the resulting visual states are often insufficiently grounded in linguistic semantics, impairing effective cross-modal alignment, particularly when visual semantics or geometric relationships must be reasoned over across distant regions or multiple images. To address these challenges, we propose "chatting with images", a new framework that reframes visual manipulation as language-guided feature modulation. Under the guidance of expressive language prompts, the model dynamically performs joint re-encoding over multiple image regions, enabling tighter coupling between linguistic reasoning and visual state updates. We instantiate this paradigm in ViLaVT, a novel LVLM equipped with a dynamic vision encoder explicitly designed for such interactive visual reasoning, and train it with a two-stage curriculum combining supervised fine-tuning and reinforcement learning to promote effective reasoning behaviors. Extensive experiments across eight benchmarks demonstrate that ViLaVT achieves strong and consistent improvements, with particularly pronounced gains on complex multi-image and video-based spatial reasoning tasks.

Tags

LVLM, Vision-Language Models, Multimodal Learning, Visual Reasoning, Language-Guided

arXiv Categories

cs.CV cs.AI cs.CL