AI Agents relevance: 6/10

Fast-Slow Efficient Training for Multimodal Large Language Models via Visual Token Pruning

Dingkun Zhang, Shuhan Qi, Yulin Wu, Xinyu Xiao, Xuan Wang, Long Chen
arXiv: 2602.03815v1 Published: 2026-02-03 Updated: 2026-02-03

AI Summary

Proposes DualSpeed, a framework that accelerates the training of multimodal large language models via visual token pruning while preserving inference performance.

Key Contributions

  • Proposes the DualSpeed fast-slow training framework
  • Incorporates Visual Token Pruning (VTP) to accelerate training
  • Uses self-distillation to maintain training-inference consistency
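Visual Token Pruning, as named in the second bullet, typically ranks visual tokens by an importance score (for example, the attention they receive from text tokens) and keeps only the top-k. A minimal, generic NumPy sketch of that idea follows; the function name, the score source, and the default keep ratio are illustrative assumptions, not the paper's plugin interface:

```python
import numpy as np

def prune_visual_tokens(visual_tokens, scores, keep_ratio=0.25):
    """Keep the top-k visual tokens by importance score.

    visual_tokens: (N, D) array of token embeddings
    scores:        (N,) importance per token (e.g., attention received)
    keep_ratio:    fraction of tokens to keep (hypothetical default)
    """
    n = visual_tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    # Indices of the k highest-scoring tokens, restored to original order
    keep = np.sort(np.argsort(scores)[-k:])
    return visual_tokens[keep], keep

# Example: 8 tokens of dim 4, keep 25% -> 2 tokens survive
tokens = np.random.randn(8, 4)
scores = np.arange(8, dtype=float)   # tokens 6 and 7 score highest
pruned, idx = prune_visual_tokens(tokens, scores)
```

Shortening the visual sequence this way reduces the quadratic attention cost per training step, which is where the reported speedups come from.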

Methodology

DualSpeed alternates between a fast mode (VTP plus mode isolation) and a slow mode (full visual sequences plus self-distillation), balancing training efficiency and final performance.
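The alternation above can be sketched as a training loop: most steps run in fast mode on pruned tokens, while occasional slow-mode steps run on the full sequence with a KL self-distillation term pulling the slow-mode output toward the well-trained fast mode. The step ratio, loss form, and all names below are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-9):
    # KL(p || q): self-distillation term, slow mode (q) learns from fast mode (p)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def dualspeed_mode(step, slow_every=4):
    """Pick the mode for a step (hypothetical 3:1 fast/slow schedule)."""
    return "slow" if step % slow_every == slow_every - 1 else "fast"

# Toy logits standing in for the model's outputs in each mode
fast_logits = np.array([2.0, 0.5, -1.0])   # teacher: sufficiently trained fast mode
slow_logits = np.array([1.5, 0.7, -0.8])   # student: slow mode on full tokens

schedule = [dualspeed_mode(s) for s in range(8)]
distill_loss = kl_div(softmax(fast_logits), softmax(slow_logits))
```

Because fast-mode steps dominate the schedule, most of the wall-clock time is spent on short pruned sequences, while the sparse slow-mode steps keep the model usable on full, non-pruned sequences at inference.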

Original Abstract

Multimodal Large Language Models (MLLMs) suffer from a severe training-inefficiency issue associated with their massive model sizes and visual token counts. Existing efforts in efficient training focus on reducing model sizes or trainable parameters. Inspired by the success of Visual Token Pruning (VTP) in improving inference efficiency, we explore another substantial research direction for efficient training: reducing visual tokens. However, applying VTP at the training stage results in a training-inference mismatch: pruning-trained models perform poorly when inferring on non-pruned, full visual token sequences. To close this gap, we propose DualSpeed, a fast-slow framework for efficient training of MLLMs. The fast mode is the primary mode; it incorporates existing VTP methods as plugins to reduce visual tokens, along with a mode isolator to isolate the model's behaviors. The slow mode is the auxiliary mode, in which the model is trained on full visual sequences to retain training-inference consistency. To boost slow-mode training, it further leverages self-distillation to learn from the sufficiently trained fast mode. Together, DualSpeed achieves both training efficiency and non-degraded performance. Experiments show DualSpeed accelerates the training of LLaVA-1.5 by 2.1$\times$ and LLaVA-NeXT by 4.0$\times$ while retaining over 99% of performance. Code: https://github.com/dingkun-zhang/DualSpeed

Tags

Multimodal Learning · Large Language Models · Visual Token Pruning · Efficient Training · Self-Distillation

arXiv Categories

cs.CV cs.LG