Multimodal Learning (Relevance: 9/10)

MSRL: Scaling Generative Multimodal Reward Modeling via Multi-Stage Reinforcement Learning

Chenglong Wang, Yifu Huo, Yang Gan, Qiaozhi He, Qi Meng, Bei Li, Yan Wang, Junfu Liu, Tianhua Zhou, Jingbo Zhu, Tong Xiao
arXiv: 2603.25108v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

MSRL improves multimodal reward models through multi-stage reinforcement learning, addressing the shortage of labeled multimodal preference data and yielding substantial performance gains.

Key Contributions

  • Proposes a Multi-Stage Reinforcement Learning (MSRL) approach for scaling the training of multimodal reward models (MRMs).
  • Designs a cross-modal knowledge distillation method to improve preference generalization within MSRL.
  • Demonstrates experimentally that MSRL substantially improves MRM performance on both visual understanding and visual generation tasks without requiring additional multimodal preference annotations.
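One plausible reading of the cross-modal knowledge distillation contribution is that a text- or caption-conditioned teacher's preference distribution supervises a multimodal student. The toy sketch below illustrates that idea as a KL divergence over preference logits; the function names and the exact loss form are illustrative assumptions, not the paper's definition.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of preference logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions of equal length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_modal_distill_loss(teacher_logits, student_logits):
    """Distillation loss: divergence between the teacher's (text/caption)
    preference distribution and the student's (multimodal) distribution.
    Hypothetical formulation for illustration only."""
    return kl_divergence(softmax(teacher_logits), softmax(student_logits))
```

When the student matches the teacher exactly, the loss is zero; any disagreement in the preference ranking drives it positive, pushing the multimodal student toward the teacher's reward reasoning.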

Methodology

Learns reward reasoning ability from large-scale textual preference data, then progressively transfers this capability through caption-based and fully multimodal reinforcement learning stages.
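The staged curriculum above can be sketched as a training loop that reuses the same reward model across three data regimes in a fixed order. This is a minimal toy sketch of the control flow only; the class names, the `PreferencePair` fields, and the `rl_step` placeholder are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PreferencePair:
    prompt: str    # query: text-only, text + caption, or text + image reference
    chosen: str    # preferred response
    rejected: str  # dispreferred response
    stage: str     # "text", "caption", or "multimodal"

@dataclass
class RewardModel:
    history: List[str] = field(default_factory=list)

    def rl_step(self, batch: List[PreferencePair]) -> None:
        # Placeholder for one RLVR update: in the real method, a verifiable
        # reward checks whether the model ranks `chosen` above `rejected`.
        # Here we only record which stage each training pair came from.
        self.history.extend(p.stage for p in batch)

def msrl_train(model, text_data, caption_data, multimodal_data):
    # Stage order is the point: reward reasoning is learned on abundant
    # text preferences first, then transferred via caption-based pairs,
    # and finally refined on scarce fully multimodal pairs.
    for batch in (text_data, caption_data, multimodal_data):
        model.rl_step(batch)
    return model
```

Running the loop with one pair per stage leaves the model's update history in curriculum order, which is the invariant the multi-stage design relies on.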

Original Abstract

Recent advances in multimodal reward modeling have been largely driven by a paradigm shift from discriminative to generative approaches. Building on this progress, recent studies have further employed reinforcement learning from verifiable rewards (RLVR) to enhance multimodal reward models (MRMs). Despite their success, RLVR-based training typically relies on labeled multimodal preference data, which are costly and labor-intensive to obtain, making it difficult to scale MRM training. To overcome this limitation, we propose a Multi-Stage Reinforcement Learning (MSRL) approach, which can achieve scalable RL for MRMs with limited multimodal data. MSRL replaces the conventional RLVR-based training paradigm by first learning a generalizable reward reasoning capability from large-scale textual preference data, and then progressively transferring this capability to multimodal tasks through caption-based and fully multimodal reinforcement-learning stages. Furthermore, we introduce a cross-modal knowledge distillation approach to improve preference generalization within MSRL. Extensive experiments demonstrate that MSRL effectively scales the RLVR-based training of generative MRMs and substantially improves their performance across both visual understanding and visual generation tasks (e.g., from 66.6% to 75.9% on VL-RewardBench and from 70.2% to 75.7% on GenAI-Bench), without requiring additional multimodal preference annotations. Our code is available at: https://github.com/wangclnlp/MSRL.

Tags

Multimodal Learning  Reinforcement Learning  Reward Modeling  Visual Understanding  Visual Generation

arXiv Category

cs.CV