Label What Matters: Modality-Balanced and Difficulty-Aware Multimodal Active Learning
AI Summary
Proposes the RL-MBA framework, which addresses modality balance and difficulty awareness in multimodal active learning to improve both model performance and fairness.
Key Contributions
- Propose the RL-MBA framework
- Design an Adaptive Modality Contribution Balancing (AMCB) mechanism
- Propose Evidential Fusion for Difficulty-Aware Policy Adjustment (EFDA)
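The EFDA idea of estimating sample difficulty via evidential fusion can be illustrated with a small sketch. This is not the paper's implementation; it assumes the common subjective-logic recipe (evidence from a softplus of the logits, Dirichlet parameters `alpha = evidence + 1`, uncertainty mass `u = K / S`), with per-modality uncertainties averaged as a simple fused difficulty proxy:

```python
import numpy as np

def evidential_uncertainty(logits, num_classes):
    """Subjective-logic uncertainty for one modality's class logits.

    evidence = softplus(logits); alpha = evidence + 1 (Dirichlet params);
    uncertainty mass u = K / S with S = sum(alpha), so u is in (0, 1]
    and shrinks as the evidence grows.
    """
    evidence = np.log1p(np.exp(np.asarray(logits, dtype=float)))  # softplus
    alpha = evidence + 1.0
    return num_classes / alpha.sum()

def fused_difficulty(logits_per_modality, num_classes):
    """Average the per-modality uncertainties as a fused difficulty score
    (a hypothetical stand-in for the paper's evidential fusion rule)."""
    us = [evidential_uncertainty(l, num_classes) for l in logits_per_modality]
    return float(np.mean(us))
```

A confident modality (one large logit) contributes low uncertainty, while a flat, near-zero logit vector contributes high uncertainty, so samples on which all modalities are unsure receive the highest difficulty score.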
Methodology
Sample selection is modeled as a Markov Decision Process: reinforcement learning dynamically adjusts the modality weights across selection rounds, while evidential fusion estimates per-sample difficulty.
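One round of this selection loop can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the policy is reduced to a weighted combination of per-modality informativeness scores, and the reinforcement feedback to a multiplicative (exponentiated-gradient) weight update driven by per-modality rewards, which are assumed quantities here:

```python
import numpy as np

def update_modality_weights(weights, rewards, lr=0.5):
    """Multiplicative weight update: modalities whose selections produced
    higher reward (e.g. accuracy gain) receive more weight next round."""
    w = np.asarray(weights, dtype=float) * np.exp(lr * np.asarray(rewards, dtype=float))
    return w / w.sum()

def select_batch(scores, weights, budget):
    """scores: (n_samples, n_modalities) per-modality informativeness
    (e.g. uncertainty or difficulty). Combine with the current modality
    weights and pick the top-`budget` samples to annotate."""
    combined = np.asarray(scores, dtype=float) @ np.asarray(weights, dtype=float)
    return np.argsort(-combined)[:budget]
```

In RL-MBA the policy additionally conditions on uncertainty and diversity and the reward includes a balance term; the sketch only shows the core feedback loop between selection and modality weighting.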
Original Abstract
Multimodal learning integrates complementary information from different modalities such as image, text, and audio to improve model performance, but its success relies on large-scale labeled data, which is costly to obtain. Active learning (AL) mitigates this challenge by selectively annotating informative samples. In multimodal settings, many approaches implicitly assume that modality importance is stable across rounds and keep selection rules fixed at the fusion stage, which leaves them insensitive to the dynamic nature of multimodal learning, where the relative value of modalities and the difficulty of instances shift as training proceeds. To address this issue, we propose RL-MBA, a reinforcement-learning framework for modality-balanced, difficulty-aware multimodal active learning. RL-MBA models sample selection as a Markov Decision Process, where the policy adapts to modality contributions, uncertainty, and diversity, and the reward encourages accuracy gains and balance. Two key components drive this adaptability: (1) Adaptive Modality Contribution Balancing (AMCB), which dynamically adjusts modality weights via reinforcement feedback, and (2) Evidential Fusion for Difficulty-Aware Policy Adjustment (EFDA), which estimates sample difficulty via uncertainty-based evidential fusion to prioritize informative samples. Experiments on Food101, KineticsSound, and VGGSound demonstrate that RL-MBA consistently outperforms strong baselines, improving both classification accuracy and modality fairness under limited labeling budgets.