Multimodal Learning (Relevance: 8/10)

PromptCD: Test-Time Behavior Enhancement via Polarity-Prompt Contrastive Decoding

Baolong Bi, Yuyao Ge, Shenghua Liu, Yuchen He, Siqian Tong, Lizhe Chen, Lingrui Mei, Zehao Li, Yiwei Wang, Yujun Cai, Ming-Hsuan Yang, Xueqi Cheng
arXiv: 2602.20696v1 Published: 2026-02-24 Updated: 2026-02-24

AI Summary

PromptCD is a test-time behavior control method that improves the reliability and safety of LLMs and VLMs via contrastive decoding.

Key Contributions

  • Proposes Polarity-Prompt Contrastive Decoding (PromptCD), a test-time behavior control method.
  • Generalizes contrastive decoding to a broader range of enhancement objectives, applicable to both LLMs and VLMs.
  • Experiments demonstrate that PromptCD effectively improves the "3H" alignment metrics of LLMs and the VQA performance of VLMs.

Methodology

PromptCD constructs paired positive and negative guiding prompts and contrasts the model's responses to them (token-level probability distributions in LLMs, visual attention patterns in VLMs) to reinforce the desired behavior.
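The summary does not spell out the exact contrast rule, so the following is only a minimal sketch of one standard contrastive-decoding formulation for the LLM case: next-token logits are computed once under the positive guiding prompt and once under the negative one, and the two log-distributions are contrasted to sharpen tokens the positive prompt favors. The function name, the `alpha` weight, and the toy logits are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def promptcd_next_token_dist(logits_pos, logits_neg, alpha=0.5):
    """Contrast the distributions induced by the positive and negative
    guiding prompts: amplify tokens that are likelier under the positive
    prompt than under the negative one."""
    log_p = np.log(softmax(logits_pos))  # log-probs under positive prompt
    log_n = np.log(softmax(logits_neg))  # log-probs under negative prompt
    contrast = (1 + alpha) * log_p - alpha * log_n
    return softmax(contrast)

# Toy vocabulary of 4 tokens: token 0 is favored by the positive prompt
# and disfavored by the negative prompt, so contrasting should boost it.
pos = np.array([2.0, 1.0, 0.5, 0.1])
neg = np.array([0.5, 1.0, 2.0, 0.1])
dist = promptcd_next_token_dist(pos, neg, alpha=0.5)
```

With these toy logits, the contrasted distribution puts strictly more mass on token 0 than the plain positive-prompt distribution does, which is the intended "reinforce desirable behavior" effect.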

Original Abstract

Reliable AI systems require large language models (LLMs) to exhibit behaviors aligned with human preferences and values. However, most existing alignment approaches operate at training time and rely on additional high-quality data, incurring significant computational and annotation costs. While recent work has shown that contrastive decoding can leverage a model's internal distributions to improve specific capabilities, its applicability remains limited to narrow behavioral scopes and scenarios. In this work, we introduce Polarity-Prompt Contrastive Decoding (PromptCD), a test-time behavior control method that generalizes contrastive decoding to broader enhancement settings. PromptCD constructs paired positive and negative guiding prompts for a target behavior and contrasts model responses (specifically, token-level probability distributions in LLMs and visual attention patterns in VLMs) to reinforce desirable outcomes. This formulation extends contrastive decoding to a wide range of enhancement objectives and is applicable to both LLMs and Vision-Language Models (VLMs) without additional training. For LLMs, experiments on the "3H" alignment objectives (helpfulness, honesty, and harmlessness) demonstrate consistent and substantial improvements, indicating that post-trained models can achieve meaningful self-enhancement purely at test time. For VLMs, we further analyze contrastive effects on visual attention, showing that PromptCD significantly improves VQA performance by reinforcing behavior-consistent visual grounding. Collectively, these results highlight PromptCD as a simple, general, and cost-efficient strategy for reliable behavior control across modalities.

Tags

Contrastive Learning, Behavior Control, Large Language Models, Vision-Language Models, Test-Time Intervention

arXiv Categories

cs.AI