Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models
AI Summary
Photon accelerates multimodal large language models for visual question answering on 3D medical imaging via adaptive token compression.
Key Contributions
- Proposes instruction-conditioned token scheduling and surrogate gradient propagation to adaptively compress the token sequence
- Introduces a custom backpropagation rule with gradient restoration to optimize the discrete token-dropping decision
- Designs regularization objectives that mitigate language-only bias and improve model reliability
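The custom backpropagation rule with gradient restoration resembles a straight-through-style surrogate. Below is a minimal NumPy sketch, assuming the drop decision is a hard threshold on per-token relevance scores and that the restored gradient comes from a sigmoid surrogate; the function names and the exact surrogate are illustrative, not Photon's actual rule:

```python
import numpy as np

def drop_forward(scores, thresh=0.0):
    # Forward pass: hard keep/drop mask -- a non-differentiable step function.
    return (scores >= thresh).astype(scores.dtype)

def drop_backward(grad_out, scores):
    # "Gradient restoration" sketch: instead of the step function's true
    # (zero almost everywhere) gradient, backpropagate through a smooth
    # sigmoid surrogate so the token scorer still receives learning signal,
    # even for tokens that were dropped in the forward pass.
    sig = 1.0 / (1.0 + np.exp(-scores))
    return grad_out * sig * (1.0 - sig)

scores = np.array([-2.0, -0.1, 0.3, 1.5])
mask = drop_forward(scores)                       # -> [0., 0., 1., 1.]
grads = drop_backward(np.ones_like(scores), scores)
# Dropped tokens (mask == 0) still receive nonzero gradient via the surrogate.
```

This is the standard trick for making a discrete selection trainable end to end: the forward pass stays hard (so inference truly skips tokens), while the backward pass pretends the selection was smooth.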
Methodology
Adaptively compresses the token sequences of 3D medical volumes through token scheduling and surrogate gradient propagation, and adds regularization objectives to improve reliability.
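As a rough illustration of instruction-conditioned scheduling, the sketch below scores each visual token by cosine similarity to an instruction embedding and keeps only those above a threshold, yielding a variable-length token sequence per volume. The scorer and threshold are hypothetical stand-ins, not the paper's actual architecture:

```python
import numpy as np

def schedule_tokens(tokens, instruction, keep_thresh=0.2):
    """Keep visual tokens relevant to the instruction (illustrative scorer).

    tokens:      (N, d) visual token embeddings for one volume
    instruction: (d,) embedding of the text instruction/question
    """
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    q = instruction / np.linalg.norm(instruction)
    scores = t @ q                         # cosine similarity per token
    keep = scores >= keep_thresh           # hard, discrete drop decision
    return tokens[keep], keep

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))          # toy stand-in for 3D volume tokens
instr = rng.normal(size=8)
kept, mask = schedule_tokens(tokens, instr)
```

Because the number of kept tokens depends on the scores rather than a fixed budget, different volumes (and different questions) produce sequences of different lengths, which is what distinguishes this from fixed-length token compression.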
Original Abstract
Multimodal large language models are promising for clinical visual question answering tasks, but scaling to 3D imaging is hindered by high computational costs. Prior methods often rely on 2D slices or fixed-length token compression, disrupting volumetric continuity and obscuring subtle findings. We present Photon, a framework that represents 3D medical volumes with token sequences of variable length. Photon introduces instruction-conditioned token scheduling and surrogate gradient propagation to adaptively reduce tokens during both training and inference, which lowers computational cost while mitigating the attention dilution caused by redundant tokens. It incorporates a custom backpropagation rule with gradient restoration to enable differentiable optimization despite discrete token drop. To stabilize token compression and ensure reliable use of visual evidence, Photon further applies regularization objectives that mitigate language-only bias and improve reliability. Experiments on diverse medical visual question answering tasks show that Photon achieves state-of-the-art accuracy while reducing resource usage and accelerating both training and inference.