Multimodal Learning (Relevance: 9/10)

FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models

Haoyang Li, Liang Wang, Siyu Zhou, Jiacheng Sun, Jing Jiang, Chao Wang, Guodong Long, Yan Peng
arXiv: 2603.08708v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

Targets the problem of foreground attention drift during VLM fine-tuning and proposes an adaptive foreground-guided prompt tuning method.

Key Contributions

  • Propose a Foreground Reliability Gate that improves foreground view quality
  • Design a Foreground Distillation Compensation module that guides visual attention toward the foreground
  • Introduce a Prior Calibration module that mitigates the generalization degradation caused by over-focusing on the foreground

Methodology

Proposes the FVG-PT module, which combines a learnable gate, distillation compensation, and prior calibration to adaptively guide visual attention toward the foreground.
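The paper itself does not spell out the module equations here, but the three components can be illustrated with a minimal conceptual sketch. All shapes, the sigmoid gate, the MSE distillation term, the KL-based calibration term, and the weight `beta` are assumptions for illustration, not the authors' actual formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fvg_pt_loss(attn_tuned, attn_frozen, fg_mask, gate_logit,
                probs_tuned, probs_prior, beta=0.1):
    """Conceptual sketch of the three FVG-PT components (hypothetical).

    attn_tuned, attn_frozen : (N,) visual attention maps from the tuned
                              and frozen encoders, flattened over patches
    fg_mask                 : (N,) binary mask defining the foreground view
    gate_logit              : learnable scalar; Foreground Reliability Gate
    probs_tuned, probs_prior: (C,) class probabilities from the tuned model
                              and the frozen (zero-shot) prior
    """
    # Foreground Reliability Gate: how much to trust the foreground view
    g = sigmoid(gate_logit)
    # Foreground Distillation Compensation: pull tuned attention toward the
    # gated foreground attention of the frozen encoder
    target = attn_frozen * fg_mask
    distill = np.mean((attn_tuned - g * target) ** 2)
    # Prior Calibration: KL(tuned || prior) discourages over-fitting the
    # predictions to the foreground and preserves generalization
    calib = np.sum(probs_tuned * np.log(probs_tuned / probs_prior))
    return distill + beta * calib
```

In this toy form, the gate scales the teacher signal rather than the student, and the calibration term keeps the tuned class distribution close to the pretrained prior; the actual FVG-PT design may differ in all of these choices.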

Original Abstract

CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to efficiently adapt to downstream tasks. Although existing studies have made significant progress, they pay limited attention to changes in the internal attention representations of VLMs during the tuning process. In this paper, we attribute the failure modes of prompt tuning predictions to shifts in foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive plug-and-play foreground attention guidance module, to alleviate the shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance the foreground view quality, applies a Foreground Distillation Compensation module to guide visual attention toward the foreground, and further introduces a Prior Calibration module to mitigate generalization degradation caused by excessive focus on the foreground. Experiments on multiple backbone models and datasets show the effectiveness and compatibility of FVG-PT. Codes are available at: https://github.com/JREion/FVG-PT

Tags

Vision-Language Models, Prompt Tuning, Foreground Attention, Adaptation

arXiv Category

cs.CV