CrystaL: Spontaneous Emergence of Visual Latents in MLLMs
AI Summary
CrystaL improves the visual understanding of Multimodal Large Language Models (MLLMs) by aligning the latent representations of intact and corrupted images.
Main Contributions
- Proposes the CrystaL framework, which improves the preservation of visual information without requiring extra annotations
- Distills task-relevant visual semantics by aligning attention patterns and prediction distributions
- Significantly outperforms existing methods on perception-intensive benchmarks
Methodology
A dual-path framework processes the intact and corrupted images separately; by aligning the attention patterns and prediction distributions across the two paths, it distills the latent representations into task-relevant visual semantics.
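The cross-path alignment can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, tensor shapes, and loss weights (`w_attn`, `w_pred`) are all assumptions, and plain KL divergence is used as a generic choice for comparing both the attention distributions and the prediction distributions of the two paths.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-8):
    # Mean KL(p || q) over rows; p and q are valid probability distributions.
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def alignment_loss(attn_intact, attn_corrupt, logits_intact, logits_corrupt,
                   w_attn=1.0, w_pred=1.0):
    """Hypothetical two-term alignment loss between the intact and corrupted paths.

    attn_*:   (batch, heads, queries, keys) attention weights (rows sum to 1)
    logits_*: (batch, vocab) next-token logits
    """
    keys = attn_intact.shape[-1]
    # Attention alignment: KL between the two paths' attention distributions over keys.
    l_attn = kl_divergence(attn_intact.reshape(-1, keys),
                           attn_corrupt.reshape(-1, keys))
    # Prediction alignment: KL between the two paths' output distributions.
    l_pred = kl_divergence(softmax(logits_intact), softmax(logits_corrupt))
    return w_attn * l_attn + w_pred * l_pred
```

Minimizing this loss pushes the corrupted path's attention and predictions toward the intact path's, which is one plausible way to encourage the intermediate latents to retain the visual information the corruption removed.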
Original Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable performance by integrating powerful language backbones with large-scale visual encoders. Among these, latent Chain-of-Thought (CoT) methods enable implicit reasoning in continuous hidden states, facilitating seamless vision-language integration and faster inference. However, existing heuristically predefined supervision signals in latent CoT provide limited guidance for preserving critical visual information in intermediate latent states. To address this limitation, we propose CrystaL (Crystallized Latent Reasoning), a single-stage framework with two paths to process intact and corrupted images, respectively. By explicitly aligning the attention patterns and prediction distributions across the two paths, CrystaL crystallizes latent representations into task-relevant visual semantics, without relying on auxiliary annotations or external modules. Extensive experiments on perception-intensive benchmarks demonstrate that CrystaL consistently outperforms state-of-the-art baselines, achieving substantial gains in fine-grained visual understanding while maintaining robust reasoning capabilities.