MissBench: Benchmarking Multimodal Affective Analysis under Imbalanced Missing Modalities
AI Summary
Proposes MissBench, a benchmark for evaluating multimodal affective analysis models under imbalanced missing modalities, together with two diagnostic metrics, MEI and MLI.
Key Contributions
- Proposes the MissBench benchmarking framework
- Defines two diagnostic metrics: the Modality Equity Index (MEI) and the Modality Learning Index (MLI)
- Reveals the limitations of existing models under imbalanced missing modalities
Methodology
Constructs both shared and imbalanced missing-rate protocols, evaluates existing models on multiple affective datasets, and uses MEI and MLI to quantify modality contributions and optimization balance.
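An imbalanced missing-rate protocol can be illustrated with a small sampling sketch. The modality names and per-modality rates below are hypothetical placeholders, not MissBench's actual configuration; the sketch only shows how per-modality drop rates differ from a single shared rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality missing rates for an *imbalanced* protocol:
# fragile streams (acoustic, visual) are dropped far more often than text.
missing_rates = {"text": 0.1, "acoustic": 0.5, "visual": 0.7}

def sample_presence_mask(n_samples, rates, rng):
    """Return a {modality: bool array} presence mask, one entry per sample.
    Guarantees at least one modality is kept for every sample."""
    mask = {m: rng.random(n_samples) >= r for m, r in rates.items()}
    mods = list(rates)
    for i in range(n_samples):
        if not any(mask[m][i] for m in mods):
            # fall back to the modality with the lowest missing rate
            keep = min(mods, key=lambda m: rates[m])
            mask[keep][i] = True
    return mask

mask = sample_presence_mask(10_000, missing_rates, rng)
for m in missing_rates:
    print(m, round(1 - mask[m].mean(), 3))  # empirical missing rate
```

A shared-rate protocol is the special case where all entries of `missing_rates` are equal; the imbalanced case is what exposes the inequities the benchmark is designed to surface.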
Original Abstract
Multimodal affective computing underpins key tasks such as sentiment analysis and emotion recognition. Standard evaluations, however, often assume that textual, acoustic, and visual modalities are equally available. In real applications, some modalities are systematically more fragile or expensive, creating imbalanced missing rates and training biases that task-level metrics alone do not reveal. We introduce MissBench, a benchmark and framework for multimodal affective tasks that standardizes both shared and imbalanced missing-rate protocols on four widely used sentiment and emotion datasets. MissBench also defines two diagnostic metrics. The Modality Equity Index (MEI) measures how fairly different modalities contribute across missing-modality configurations. The Modality Learning Index (MLI) quantifies optimization imbalance by comparing modality-specific gradient norms during training, aggregated across modality-related modules. Experiments on representative method families show that models that appear robust under shared missing rates can still exhibit marked modality inequity and optimization imbalance under imbalanced conditions. These findings position MissBench, together with MEI and MLI, as practical tools for stress-testing and analyzing multimodal affective models in realistic incomplete-modality settings. For reproducibility, we release our code at: https://anonymous.4open.science/r/MissBench-4098/
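The gradient-norm comparison behind MLI can be sketched on a toy late-fusion model. Everything here is an illustrative assumption: the per-modality linear "encoders", the summation fusion, and the max/min norm ratio used as an imbalance proxy are stand-ins, not the paper's exact MLI formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy late-fusion model: one linear "encoder" weight vector per modality,
# fused by summation into a scalar prediction (hypothetical setup).
d = 16
W = {m: rng.normal(size=d) for m in ("text", "acoustic", "visual")}
X = {m: rng.normal(size=(64, d)) for m in W}
y = rng.normal(size=64)

# Forward pass and squared-error residual
pred = sum(X[m] @ W[m] for m in W)
err = pred - y

# Per-modality gradient of the mean squared error w.r.t. that branch's
# weights: d/dW[m] mean((pred - y)^2) = 2 X[m]^T err / N
grads = {m: 2 * X[m].T @ err / len(err) for m in W}
norms = {m: float(np.linalg.norm(g)) for m, g in grads.items()}

# A simple imbalance proxy: ratio of the largest to the smallest
# per-modality gradient norm. (MissBench's MLI aggregates gradient norms
# across modality-related modules during training; the exact aggregation
# is not reproduced here.)
mli_proxy = max(norms.values()) / min(norms.values())
print(norms, mli_proxy)
```

A ratio near 1 would indicate that all modality branches receive comparable optimization signal; a large ratio flags the kind of optimization imbalance MLI is designed to surface.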