Multimodal Learning relevance: 9/10

Fine-tuning Pre-trained Vision-Language Models in a Human-Annotation-Free Manner

Qian-Wei Wang, Guanghao Meng, Ren Cai, Yaguang Song, Shu-Tao Xia
arXiv: 2602.04337v1 published: 2026-02-04 updated: 2026-02-04

AI Summary

CoFT proposes a fine-tuning framework for vision-language models that requires no human annotation, improving performance through dual-model collaboration.

Key Contributions

  • Proposes the Collaborative Fine-Tuning (CoFT) framework
  • Introduces a dual-prompt learning strategy to model pseudo-label cleanliness
  • Combines momentum contrastive learning with LLM-generated prompts to further improve performance

Methodology

CoFT exploits dual-model, cross-modal collaboration: positive and negative textual prompts combined with a two-phase training scheme enable unsupervised fine-tuning of vision-language models.

Original Abstract

Large-scale vision-language models (VLMs) such as CLIP exhibit strong zero-shot generalization, but adapting them to downstream tasks typically requires costly labeled data. Existing unsupervised self-training methods rely on pseudo-labeling, yet often suffer from unreliable confidence filtering, confirmation bias, and underutilization of low-confidence samples. We propose Collaborative Fine-Tuning (CoFT), an unsupervised adaptation framework that leverages unlabeled data through a dual-model, cross-modal collaboration mechanism. CoFT introduces a dual-prompt learning strategy with positive and negative textual prompts to explicitly model pseudo-label cleanliness in a sample-dependent manner, removing the need for hand-crafted thresholds or noise assumptions. The negative prompt also regularizes lightweight visual adaptation modules, improving robustness under noisy supervision. CoFT employs a two-phase training scheme, transitioning from parameter-efficient fine-tuning on high-confidence samples to full fine-tuning guided by collaboratively filtered pseudo-labels. Building on CoFT, CoFT+ further enhances adaptation via iterative fine-tuning, momentum contrastive learning, and LLM-generated prompts. Extensive experiments demonstrate consistent gains over existing unsupervised methods and even few-shot supervised baselines.
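The core dual-prompt idea — scoring each pseudo-label's cleanliness from how much more a sample matches its class's positive prompt than its negative prompt — can be sketched in a few lines. This is a minimal NumPy illustration under assumptions, not the authors' implementation: the function name, the sigmoid squashing, and the CLIP-style temperature are all illustrative choices.

```python
import numpy as np

def pseudo_label_weights(image_emb, pos_prompt_emb, neg_prompt_emb, pseudo_labels):
    """Sample-dependent cleanliness weights for pseudo-labels (illustrative sketch).

    image_emb:      (N, D) L2-normalized image embeddings
    pos_prompt_emb: (C, D) embeddings of positive prompts, one per class
    neg_prompt_emb: (C, D) embeddings of negative prompts, one per class
    pseudo_labels:  (N,)   class index assigned to each sample
    """
    pos_sim = image_emb @ pos_prompt_emb.T  # (N, C) cosine similarities
    neg_sim = image_emb @ neg_prompt_emb.T  # (N, C)
    idx = np.arange(len(pseudo_labels))
    # Cleanliness margin: affinity to the assigned class's positive prompt
    # minus affinity to its negative prompt, squashed to (0, 1).
    margin = pos_sim[idx, pseudo_labels] - neg_sim[idx, pseudo_labels]
    return 1.0 / (1.0 + np.exp(-margin / 0.07))  # 0.07: CLIP-style temperature (assumed)
```

Weights near 1 mark likely-clean pseudo-labels and weights near 0 mark likely-noisy ones, which is how a sample-dependent score can stand in for a hand-crafted confidence threshold.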

Tags

Vision-Language Models · Unsupervised Learning · Fine-Tuning · Self-Training

arXiv Categories

cs.CV cs.AI