Multimodal Learning Relevance: 9/10

Learning to Rank Caption Chains for Video-Text Alignment

Ansel Blume, Burak Uzkent, Shalini Chaudhuri, Garin Kessler
arXiv: 2603.25145v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

Proposes a ranking-based optimization method for video-text alignment and finds that finetuning the vision encoder is critical to its effectiveness.

Key Contributions

  • Proposes a ranking-based optimization method that improves video-text alignment
  • Generates large-scale, totally ordered caption chains through repeated caption degradation
  • Shows that finetuning the vision encoder is essential for DPO-style methods to be effective
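The chain-generation idea in the second contribution can be sketched as follows. The paper does not specify the degradation operator here, so this sketch uses a deliberately simple stand-in (dropping every Nth word to lose detail); `degrade` and `build_chain` are hypothetical names. The key property illustrated is that applying degradation repeatedly yields a totally ordered chain, with each caption strictly less detailed than its predecessor.

```python
def degrade(caption: str, drop_every: int = 3) -> str:
    """Hypothetical degradation step: drop every Nth word, losing detail.
    (The actual paper's degradation operator is not specified here.)"""
    words = caption.split()
    return " ".join(w for i, w in enumerate(words) if (i + 1) % drop_every != 0)

def build_chain(caption: str, depth: int = 3) -> list[str]:
    """Repeatedly degrade a caption, producing a totally ordered chain
    from most faithful (index 0) to least faithful (index -1)."""
    chain = [caption]
    for _ in range(depth):
        chain.append(degrade(chain[-1]))
    return chain

chain = build_chain("a man in a red jacket skis down a steep snowy slope at sunset", depth=2)
```

Each element of `chain` is strictly shorter than the one before it, giving the total order the ranking objective trains against.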

Methodology

Ordered caption chains are generated through repeated caption degradation, and the model is then trained with a ranking-optimization objective for video-text alignment.
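One standard way to turn a totally ordered chain into a training objective is a listwise Plackett-Luce loss, shown below as a minimal sketch. The paper's exact ranking objective is not given in this summary, so this is an assumed formulation, not the authors' implementation: given model scores for the captions listed from best to worst, the loss is the negative log-likelihood that the scores reproduce that order.

```python
import math

def plackett_luce_nll(scores: list[float]) -> float:
    """Negative log-likelihood that `scores` rank the captions in the
    listed order (index 0 = most faithful) under a Plackett-Luce model.
    At each step, the top remaining caption competes against the tail."""
    nll = 0.0
    for i in range(len(scores) - 1):
        tail = scores[i:]
        # numerically stable log-sum-exp over the remaining candidates
        m = max(tail)
        lse = m + math.log(sum(math.exp(s - m) for s in tail))
        nll += lse - scores[i]
    return nll

# Scores for a 3-caption chain, most to least faithful.
loss_good = plackett_luce_nll([2.0, 1.0, 0.0])  # scores agree with the order
loss_bad = plackett_luce_nll([0.0, 1.0, 2.0])   # scores reverse the order
```

Unlike binary DPO, which only compares one winner against one loser, this loss penalizes every inversion along the chain, so a "losing" caption that is still fairly faithful contributes proportionally less.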

Original Abstract

Direct preference optimization (DPO) is an effective technique to train language models to generate preferred over dispreferred responses. However, this binary "winner-takes-all" approach is suboptimal for vision-language models whose response quality is highly dependent on visual content. In particular, a response may still be faithful to the visual inputs even if it is less preferable than an alternative. The standard Bradley-Terry DPO formulation lacks this nuance, upweighting winning responses without sufficient regard for whether the "losing" response still maintains high visual fidelity. In this work, we investigate ranking optimization as an alternative that more precisely situates responses' faithfulness to visual inputs. We focus on video-text alignment using detailed video captions, proposing a method to generate challenging, totally ordered caption chains at scale through repeated caption degradation. Our results show ranking optimization outperforms binary DPO for long-form content generation and assessment, and importantly, we find that these approaches require finetuning of the vision encoder to be effective, challenging the view of DPO as purely a language-reweighting process.

Tags

Video-text alignment · Ranking optimization · Vision encoder finetuning · caption degradation · Direct Preference Optimization (DPO)

arXiv Categories

cs.CV cs.LG