Multimodal Learning Relevance: 9/10

Bridging Efficiency and Transparency: Explainable CoT Compression in Multimodal Large Reasoning Models

Yizhi Wang, Linan Yue, Min-Ling Zhang
arXiv: 2602.09485v1 Published: 2026-02-10 Updated: 2026-02-10

AI Summary

Proposes XMCC, an explainable multimodal CoT compressor that optimizes compression decisions via reinforcement learning, improving inference efficiency while providing explainability.

Key Contributions

  • Proposes the XMCC compressor to optimize CoTs in multimodal reasoning
  • Uses reinforcement learning to make CoT compression decisions
  • Generates natural-language explanations of the compression rationale

Methodology

Models CoT compression as a sequential decision-making process, optimizes it with reinforcement learning, and generates natural-language explanations for the decisions.
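The paper does not detail XMCC's policy architecture, so the following is a minimal Python sketch of the general idea only: walk the reasoning trajectory step by step, make a keep/drop decision per step, and emit a textual explanation for each decision. The linear scoring function and its weights stand in for a learned policy (which XMCC would train with an RL reward trading off answer correctness against trajectory length); the function names, features, and thresholds here are illustrative assumptions, not the paper's method.

```python
def step_features(step, question_keywords):
    # Toy features: lexical overlap with the question, and step length.
    words = set(step.lower().split())
    return len(words & question_keywords), len(words)

def keep_decision(step, question_keywords,
                  w_overlap=1.0, w_len=-0.05, bias=-2.0):
    # Linear score standing in for a learned policy; keep if positive.
    # In an RL setup these weights would be optimized against a reward
    # such as: answer_correct - lambda * kept_length.
    overlap, length = step_features(step, question_keywords)
    score = w_overlap * overlap + w_len * length + bias
    return score > 0, f"overlap={overlap}, length={length}, score={score:.2f}"

def compress_cot(steps, question, always_keep_last=True):
    # Sequentially decide keep/drop per step, recording an explanation.
    keywords = set(question.lower().split())
    kept, explanations = [], []
    for i, step in enumerate(steps):
        keep, why = keep_decision(step, keywords)
        if always_keep_last and i == len(steps) - 1:
            keep = True  # the final step carries the answer
        explanations.append(f"step {i}: {'KEEP' if keep else 'DROP'} ({why})")
        if keep:
            kept.append(step)
    return kept, explanations

question = "how many red cubes are in the image"
steps = [
    "The image shows several cubes on a table.",
    "Cubes are three dimensional shapes, unlike spheres which are round.",
    "Counting the red cubes in the image gives three.",
]
kept, why = compress_cot(steps, question)
# The generic second step is dropped; the grounded first and final
# answer-bearing steps are kept, each with a stated reason.
```

The per-step explanation strings are the sketch's analogue of XMCC's natural-language rationales; a real system would generate them with a language model rather than a feature printout.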

Original Abstract

Long chains of thought (Long CoTs) are widely employed in multimodal reasoning models to tackle complex tasks by capturing detailed visual information. However, these Long CoTs are often excessively lengthy and contain redundant reasoning steps, which can hinder inference efficiency. Compressing these long CoTs is a natural solution, yet existing approaches face two major challenges: (1) they may compromise the integrity of visual-textual reasoning by removing essential alignment cues, and (2) the compression process lacks explainability, making it difficult to discern which information is critical. To address these problems, we propose XMCC, an eXplainable Multimodal CoT Compressor that formulates compression as a sequential decision-making process optimized via reinforcement learning. XMCC can effectively shorten reasoning trajectories while preserving key reasoning steps and answer correctness, and simultaneously generates natural-language explanations for its compression decisions. Extensive experiments on representative multimodal reasoning benchmarks demonstrate that XMCC not only reduces reasoning length but also provides explanations for its compression decisions, validating its effectiveness.

Tags

Multimodal Reasoning, Chain-of-Thought, CoT Compression, Reinforcement Learning, Explainability

arXiv Categories

cs.AI