ResAdapt: Adaptive Resolution for Efficient Multimodal Reasoning
AI Summary
ResAdapt improves the reasoning efficiency of multimodal large language models under low visual budgets through adaptive resolution allocation.
Key Contributions
- Proposes the ResAdapt framework, which performs adaptive resolution allocation on the input side
- Trains the allocator with Cost-Aware Policy Optimization (CAPO)
- Validates ResAdapt's effectiveness across a range of visual reasoning tasks
Methodology
The ResAdapt framework includes an Allocator which, formulated as a contextual bandit, learns to assign an appropriate visual budget to each frame, thereby improving reasoning efficiency.
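To make the contextual-bandit framing concrete, here is a minimal sketch of a per-frame budget allocator. This is an illustrative toy, not the paper's implementation: the arm set `BUDGETS`, the epsilon-greedy linear model, and the reward `accuracy - lam * cost` are all assumptions standing in for the Allocator and CAPO's accuracy-cost signal.

```python
import numpy as np

# Hypothetical sketch: per-frame budget allocation as a contextual bandit.
# Arms are discrete visual budgets; the reward trades accuracy against cost.
BUDGETS = [64, 144, 256, 576]  # example arms: visual tokens per frame

class EpsilonGreedyAllocator:
    def __init__(self, dim, n_arms, eps=0.1, lr=0.01, seed=0):
        self.rng = np.random.default_rng(seed)
        self.eps = eps
        self.lr = lr
        # One linear reward model per arm: estimated reward = w[arm] @ context
        self.w = np.zeros((n_arms, dim))

    def choose(self, context):
        # Explore with probability eps, otherwise pick the best estimated arm.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.w)))
        return int(np.argmax(self.w @ context))

    def update(self, context, arm, accuracy, cost, lam=0.5):
        # Scalar accuracy-cost reward, then a gradient step on the chosen arm.
        reward = accuracy - lam * cost
        err = reward - self.w[arm] @ context
        self.w[arm] += self.lr * err * context

# Usage: pick a budget for a frame's feature vector, then learn from feedback.
alloc = EpsilonGreedyAllocator(dim=8, n_arms=len(BUDGETS))
ctx = np.ones(8) / 8  # stand-in for a cheap per-frame feature
arm = alloc.choose(ctx)
alloc.update(ctx, arm, accuracy=1.0, cost=BUDGETS[arm] / max(BUDGETS))
```

In this sketch the backbone MLLM is untouched, matching the paper's design: only the input resolution per frame changes, so the model keeps its native visual-token interface.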
Original Abstract
Multimodal Large Language Models (MLLMs) achieve stronger visual understanding by scaling input fidelity, yet the resulting visual token growth makes jointly sustaining high spatial resolution and long temporal context prohibitive. We argue that the bottleneck lies not in how post-encoding representations are compressed but in the volume of pixels the encoder receives, and address it with ResAdapt, an input-side adaptation framework that learns how much visual budget each frame should receive before encoding. ResAdapt couples a lightweight Allocator with an unchanged MLLM backbone, so the backbone retains its native visual-token interface while receiving an operator-transformed input. We formulate allocation as a contextual bandit and train the Allocator with Cost-Aware Policy Optimization (CAPO), which converts sparse rollout feedback into a stable accuracy-cost learning signal. Across budget-controlled video QA, temporal grounding, and image reasoning tasks, ResAdapt improves low-budget operating points and often lies on or near the efficiency-accuracy frontier, with the clearest gains on reasoning-intensive benchmarks under aggressive compression. Notably, ResAdapt supports up to 16x more frames at the same visual budget while delivering over 15% performance gain. Code is available at https://github.com/Xnhyacinth/ResAdapt.