Multimodal Learning · Relevance: 9/10

Imagination Helps Visual Reasoning, But Not Yet in Latent Space

You Li, Chi Chen, Yanghao Li, Fanhu Zeng, Kaiyu Huang, Jinan Xu, Maosong Sun
arXiv: 2602.22766v1 · Published: 2026-02-26 · Updated: 2026-02-26

AI Summary

The paper reveals that latent-space reasoning in existing Multimodal Large Language Models is largely ineffective, and proposes CapImagine, a method that performs imagination explicitly in text instead.

Key Contributions

  • Uncovers two disconnections in latent-space reasoning: between the input and the latent states, and between the latent states and the answer
  • Proposes CapImagine, a visual reasoning method based on explicit textual imagination
  • Shows experimentally that CapImagine outperforms latent-space reasoning methods

Methodology

Uses Causal Mediation Analysis to test the validity of latent-space reasoning, and probing analysis to study what information the latent states actually encode.
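The mediation test described in the abstract (input as treatment, latent tokens as mediator, answer as outcome) can be illustrated with a minimal perturbation-based sketch. The toy "model" below is purely hypothetical (not the paper's MLLM): a weak input-to-latent coupling stands in for the Input-Latent Disconnect, and the two printed effect sizes show *how* such disconnects would be measured, not the paper's actual results.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-in for an MLLM: input -> latent tokens -> answer logits.
# The 0.01 scale makes input barely influence the latents (toy "disconnect").
W_in = 0.01 * rng.standard_normal((16, 16))
bias = rng.standard_normal(16)           # latents dominated by a fixed bias
W_out = rng.standard_normal((4, 16))

def latents(x):
    return np.tanh(W_in @ x + bias)

def answer(z):
    return W_out @ z

x = rng.standard_normal(16)
x_perturbed = x + 5.0 * rng.standard_normal(16)  # "dramatic" input perturbation

# (a) Input -> Latent: how far do latent tokens move when the input changes?
z, z_p = latents(x), latents(x_perturbed)
input_latent_effect = 1.0 - cosine(z, z_p)

# (b) Latent -> Answer: how far does the answer move when latents are perturbed?
z_noised = z + 0.5 * rng.standard_normal(16)
latent_answer_effect = float(np.linalg.norm(answer(z_noised) - answer(z)))

print(f"input->latent effect (1 - cosine): {input_latent_effect:.4f}")
print(f"latent->answer effect (L2 shift):  {latent_answer_effect:.4f}")
```

In the paper's setting both effects are reported to be small, which is what motivates questioning the latent tokens' mediating role.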

Original Abstract

Latent visual reasoning aims to mimic the human imagination process by reasoning through the hidden states of Multimodal Large Language Models. While recognized as a promising paradigm for visual reasoning, the underlying mechanisms driving its effectiveness remain unclear. Motivated to demystify the true source of its efficacy, we investigate the validity of latent reasoning using Causal Mediation Analysis. We model the process as a causal chain: the input as the treatment, the latent tokens as the mediator, and the final answer as the outcome. Our findings uncover two critical disconnections: (a) Input-Latent Disconnect: dramatic perturbations of the input result in negligible changes to the latent tokens, suggesting that latent tokens do not effectively attend to the input sequence. (b) Latent-Answer Disconnect: perturbations of the latent tokens yield minimal impact on the final answer, indicating the limited causal effect that latent tokens impose on the outcome. Furthermore, extensive probing analysis reveals that latent tokens encode limited visual information and exhibit high similarity. Consequently, we challenge the necessity of latent reasoning and propose a straightforward alternative named CapImagine, which teaches the model to explicitly imagine using text. Experiments on vision-centric benchmarks show that CapImagine significantly outperforms complex latent-space baselines, highlighting the superior potential of visual reasoning through explicit imagination.
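The abstract's probing finding that latent tokens "exhibit high similarity" corresponds to a simple pairwise-similarity measurement. The sketch below fabricates latent tokens as a shared direction plus small token-specific noise purely to demonstrate the metric; the tensors and scales are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent tokens: one shared direction plus small per-token noise,
# mimicking a setting where latent tokens carry little distinct information.
shared = rng.standard_normal(64)
tokens = np.stack([shared + 0.1 * rng.standard_normal(64) for _ in range(8)])

# Mean cosine similarity over all distinct token pairs.
normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
sims = normed @ normed.T
n = len(tokens)
mean_pairwise = float((sims.sum() - n) / (n * (n - 1)))

print(f"mean pairwise cosine similarity: {mean_pairwise:.3f}")
```

A value near 1.0 means the tokens are nearly interchangeable; for genuinely informative, diverse latent tokens one would expect this statistic to be much lower.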

Tags

Multimodal Learning · Visual Reasoning · Large Language Models · Causal Analysis

arXiv Categories

cs.CL