Towards Multimodal Domain Generalization with Few Labels
AI Summary
Proposes a semi-supervised multimodal domain generalization framework that addresses the twin challenges of label scarcity and domain shift.
Key Contributions
- Introduces the Semi-Supervised Multimodal Domain Generalization (SSMDG) problem
- Proposes a unified framework with three key components
- Establishes SSMDG benchmarks and validates the method's effectiveness under both standard and missing-modality scenarios
Methodology
Generates reliable pseudo-labels via consensus-driven consistency regularization, exploits ambiguous non-consensus samples through disagreement-aware regularization, and performs cross-modal prototype alignment to learn domain- and modality-invariant representations.
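The consensus-driven pseudo-labeling idea can be sketched as follows: a pseudo-label is retained only when the fused prediction and every unimodal prediction agree on the class and the fused confidence is high; the remaining samples become the non-consensus set handled by the disagreement-aware branch. This is a minimal illustration with hypothetical function and variable names, not the paper's actual implementation.

```python
# Sketch of confident fused-unimodal consensus filtering (hypothetical names,
# not from the paper's released code).
import torch
import torch.nn.functional as F

def consensus_pseudo_labels(fused_logits, unimodal_logits, threshold=0.95):
    """Return pseudo-labels plus a boolean mask that is True only where the
    fused prediction agrees with every unimodal prediction AND the fused
    confidence exceeds `threshold`; False marks non-consensus samples."""
    fused_probs = F.softmax(fused_logits, dim=1)
    fused_conf, fused_pred = fused_probs.max(dim=1)
    agree = torch.ones_like(fused_pred, dtype=torch.bool)
    for logits in unimodal_logits:            # one logits tensor per modality
        agree &= logits.argmax(dim=1) == fused_pred
    mask = agree & (fused_conf >= threshold)  # consensus + confidence gate
    return fused_pred, mask

# Toy example: 2 samples, 3 classes, two modalities.
fused = torch.tensor([[5.0, 0.0, 0.0],       # confident, modalities agree
                      [0.0, 2.0, 1.9]])      # low confidence, disagreement
uni = [torch.tensor([[4.0, 0.0, 0.0], [0.0, 1.0, 3.0]]),
       torch.tensor([[3.0, 1.0, 0.0], [2.0, 0.0, 1.0]])]
labels, mask = consensus_pseudo_labels(fused, uni)
# mask → [True, False]: only the first sample yields a pseudo-label
```

In the full framework, `mask` would gate the consistency loss, while samples where `mask` is False feed the disagreement-aware regularization instead.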
Original Abstract
Multimodal models ideally should generalize to unseen domains while remaining data-efficient to reduce annotation costs. To this end, we introduce and study a new problem, Semi-Supervised Multimodal Domain Generalization (SSMDG), which aims to learn robust multimodal models from multi-source data with few labeled samples. We observe that existing approaches fail to address this setting effectively: multimodal domain generalization methods cannot exploit unlabeled data, semi-supervised multimodal learning methods ignore domain shifts, and semi-supervised domain generalization methods are confined to single-modality inputs. To overcome these limitations, we propose a unified framework featuring three key components: Consensus-Driven Consistency Regularization, which obtains reliable pseudo-labels through confident fused-unimodal consensus; Disagreement-Aware Regularization, which effectively utilizes ambiguous non-consensus samples; and Cross-Modal Prototype Alignment, which enforces domain- and modality-invariant representations while promoting robustness under missing modalities via cross-modal translation. We further establish the first SSMDG benchmarks, on which our method consistently outperforms strong baselines in both standard and missing-modality scenarios. Our benchmarks and code are available at https://github.com/lihongzhao99/SSMDG.