Multimodal Learning Relevance: 9/10

Linking Perception, Confidence and Accuracy in MLLMs

Yuetian Du, Yucheng Wang, Rongyu Zhang, Zhijie Xu, Boyu Yang, Ming Kong, Jie Liu, Qiang Zhu
arXiv: 2603.12149v1 Published: 2026-03-12 Updated: 2026-03-12

AI Summary

The paper studies the confidence-calibration problem in MLLMs and proposes the CDRL and CA-TTS frameworks, improving model performance while making the model confidence-aware.

Key Contributions

  • Reveals the confidence-miscalibration problem in MLLMs
  • Proposes the Confidence-Driven Reinforcement Learning (CDRL) method
  • Proposes the Confidence-Aware Test-Time Scaling (CA-TTS) framework
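The exact form of CDRL's confidence-based reward is not given in this summary. As a rough illustration only, a Brier-style reward over original/noise image pairs might look like the sketch below; the function and its arguments are hypothetical, not the paper's actual reward:

```python
# Hypothetical confidence-based reward sketch (not the paper's CDRL reward).
# Idea: reward well-matched confidence on the original image, and penalize
# high confidence on the noise-corrupted counterpart.
def confidence_reward(correct: bool, confidence: float, is_noisy: bool) -> float:
    if is_noisy:
        # On a corrupted image the model should express low confidence.
        return -confidence
    # Brier-style shaping: maximal when confidence matches correctness.
    target = 1.0 if correct else 0.0
    return 1.0 - (confidence - target) ** 2
```

Under this shaping, a confident correct answer scores near 1, a confident wrong answer scores near 0, and any confidence expressed on the noisy variant is penalized directly.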

Methodology

CDRL is proposed to enhance perception and calibrate confidence; CA-TTS then dynamically coordinates multiple modules at test time guided by confidence signals, and experiments validate the effectiveness of the approach.
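How confidence signals schedule the test-time modules is not detailed here. A minimal sketch of one such pattern, confidence-gated Self-Consistency (one of the modules named in the abstract), could look like the following; `sample_fn` and the threshold are hypothetical:

```python
from collections import Counter

def confidence_gated_answer(sample_fn, threshold=0.8, k=5):
    """Hypothetical confidence-gated self-consistency (illustration only).

    sample_fn() -> (answer, confidence). If the first sample is already
    confident enough, return it and skip extra compute; otherwise fall
    back to majority voting over k samples.
    """
    answer, conf = sample_fn()
    if conf >= threshold:
        return answer
    # Low confidence: spend more test-time compute on majority voting.
    votes = Counter([answer] + [sample_fn()[0] for _ in range(k - 1)])
    return votes.most_common(1)[0][0]
```

The design point this illustrates is the "free lunch" claim: calibrated confidence lets the system spend extra sampling only on queries it is unsure about.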

Original Abstract

Recent advances in Multi-modal Large Language Models (MLLMs) have predominantly focused on enhancing visual perception to improve accuracy. However, a critical question remains unexplored: Do models know when they do not know? Through a probing experiment, we reveal a severe confidence miscalibration problem in MLLMs. To address this, we propose Confidence-Driven Reinforcement Learning (CDRL), which uses original-noise image pairs and a novel confidence-based reward to enhance perceptual sensitivity and robustly calibrate the model's confidence. Beyond training benefits, calibrated confidence enables more effective test-time scaling as a free lunch. We further propose Confidence-Aware Test-Time Scaling (CA-TTS), which dynamically coordinates Self-Consistency, Self-Reflection, and Visual Self-Check modules guided by confidence signals. An Expert Model acts in multiple roles (e.g., Planner, Critic, Voter) to schedule these modules and provide external verification. Our integrated framework establishes new state-of-the-art results with consistent 8.8% gains across four benchmarks. Ablation studies further demonstrate the effectiveness of each module and the superiority of our scaling strategy.
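Miscalibration of the kind probed in the abstract is commonly quantified with Expected Calibration Error (ECE), which bins predictions by stated confidence and compares each bin's average confidence against its empirical accuracy. A self-contained sketch of the standard metric (not the paper's specific probing setup):

```python
def expected_calibration_error(confidences, corrects, n_bins=10):
    """Standard ECE: weighted average over bins of |avg confidence - accuracy|.

    confidences: list of floats in [0, 1]; corrects: 0/1 labels.
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Bins are (lo, hi]; the lowest bin also catches confidence 0.0.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(corrects[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(avg_conf - acc)
    return ece
```

A well-calibrated model scores near 0; a model that answers with 95% confidence but is right only 25% of the time scores 0.7, the kind of gap the probing experiment is designed to expose.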

Tags

MLLM, Confidence Calibration, Reinforcement Learning, Multimodal Learning

arXiv Categories

cs.CV cs.CL