Multimodal Learning (Relevance: 9/10)

Conformal Cross-Modal Active Learning

Huy Hoang Nguyen, Cédric Jung, Shirin Salehi, Tobias Glück, Anke Schmeink, Andreas Kugi
arXiv: 2603.23159v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

CCMA exploits cross-modal information to improve the data efficiency of active learning for vision tasks, outperforming existing unimodal methods.

Main Contributions

  • Proposes the Conformal Cross-Modal Acquisition (CCMA) framework
  • Uses a pretrained VLM as a teacher model to provide semantically grounded uncertainty estimates
  • Improves data efficiency through multimodal conformal scoring combined with diversity-aware selection

Methodology

Builds a teacher-student architecture in which cross-modal information from a pretrained VLM guides sample selection for a vision-only student model, with conformal calibration applied to the teacher's uncertainty estimates.
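The selection loop described above can be illustrated with a minimal sketch. This is not the authors' implementation: the split-conformal calibration, the use of prediction-set size as the uncertainty signal, and the greedy farthest-point diversity step are all assumptions standing in for the paper's "multimodal conformal scoring with diversity-aware selection"; `ccma_select` and its parameters are hypothetical names.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration
    nonconformity scores (standard split-conformal recipe)."""
    n = len(cal_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q_level, method="higher")

def ccma_select(teacher_probs, features, cal_scores, budget, alpha=0.1):
    """Hypothetical CCMA-style acquisition sketch.

    teacher_probs: (pool, classes) softmax outputs of the VLM teacher.
    features: (pool, dim) embeddings used for the diversity step.
    cal_scores: nonconformity scores (e.g. 1 - p_true) on a held-out
                labeled calibration set.
    Returns `budget` indices into the unlabeled pool.
    """
    qhat = conformal_quantile(cal_scores, alpha)
    # Conformal prediction set: all labels whose teacher probability
    # exceeds 1 - qhat; a larger set means higher calibrated uncertainty.
    set_sizes = (teacher_probs >= 1.0 - qhat).sum(axis=1)
    # Keep an over-sampled candidate pool of the most uncertain points.
    cand = np.argsort(-set_sizes)[: max(4 * budget, budget)]
    # Greedy farthest-point (k-center style) pass for diversity.
    selected = [int(cand[0])]
    while len(selected) < budget:
        dists = np.min(
            np.linalg.norm(
                features[cand][:, None] - features[selected][None], axis=-1
            ),
            axis=1,
        )
        selected.append(int(cand[int(np.argmax(dists))]))
    return np.array(selected)
```

In this sketch the conformal step only reweights which samples count as uncertain; swapping the set-size score for an entropy or margin score recovers a classical uncertainty-sampling baseline, which is the comparison the abstract draws.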

Original Abstract

Foundation models for vision have transformed visual recognition with powerful pretrained representations and strong zero-shot capabilities, yet their potential for data-efficient learning remains largely untapped. Active Learning (AL) aims to minimize annotation costs by strategically selecting the most informative samples for labeling, but existing methods largely overlook the rich multimodal knowledge embedded in modern vision-language models (VLMs). We introduce Conformal Cross-Modal Acquisition (CCMA), a novel AL framework that bridges vision and language modalities through a teacher-student architecture. CCMA employs a pretrained VLM as a teacher to provide semantically grounded uncertainty estimates, conformally calibrated to guide sample selection for a vision-only student model. By integrating multimodal conformal scoring with diversity-aware selection strategies, CCMA achieves superior data efficiency across multiple benchmarks. Our approach consistently outperforms state-of-the-art AL baselines, demonstrating clear advantages over methods relying solely on uncertainty or diversity metrics.

Tags

Active Learning, Cross-Modal Learning, Vision-Language Models, Uncertainty Estimation

arXiv Categories

cs.CV cs.LG