Multimodal Learning (Relevance: 9/10)

VAUQ: Vision-Aware Uncertainty Quantification for LVLM Self-Evaluation

Seongheon Park, Changdae Oh, Hyeong Kyu Choi, Xuefeng Du, Sharon Li
arXiv: 2602.21054v1 Published: 2026-02-24 Updated: 2026-02-24

AI Summary

VAUQ proposes a vision-aware uncertainty quantification framework for assessing an LVLM's confidence in predictions that depend on visual input.

Key Contributions

  • Introduces the Image-Information Score (IS) to quantify the contribution of visual information to a prediction
  • Introduces an unsupervised core-region masking strategy that amplifies the influence of salient regions
  • Introduces a training-free scoring function that reliably reflects answer correctness

Methodology

The Image-Information Score computed under core-region masking is combined with predictive entropy, yielding a training-free confidence estimator for LVLM self-evaluation.
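The combination described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exact form of IS (assumed here to be the entropy drop from conditioning on the image), the masking procedure, the combination rule `vauq_score`, and the weight `alpha` are all assumptions for exposition.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a predictive distribution over candidate answers."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def image_information_score(p_with_image, p_text_only):
    """Hypothetical IS: reduction in predictive entropy attributable to the visual
    input, i.e. H(p(y | text)) - H(p(y | text, image)). A large positive value
    suggests the answer genuinely depends on visual evidence rather than
    language priors."""
    return entropy(p_text_only) - entropy(p_with_image)

def vauq_score(p_core_masked, p_text_only, alpha=1.0):
    """Assumed combination rule: the IS computed on the core-masked image,
    minus the predictive entropy, so that higher scores indicate higher
    confidence. alpha is a hypothetical trade-off weight."""
    is_core = image_information_score(p_core_masked, p_text_only)
    return alpha * is_core - entropy(p_core_masked)
```

For example, a uniform text-only distribution together with a sharply peaked image-conditioned distribution produces a large positive IS, signaling a vision-grounded (and likely correct) answer; a near-zero IS suggests the model is answering from language priors alone.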

Original Abstract

Large Vision-Language Models (LVLMs) frequently hallucinate, limiting their safe deployment in real-world applications. Existing LLM self-evaluation methods rely on a model's ability to estimate the correctness of its own outputs, which can improve deployment reliability; however, they depend heavily on language priors and are therefore ill-suited for evaluating vision-conditioned predictions. We propose VAUQ, a vision-aware uncertainty quantification framework for LVLM self-evaluation that explicitly measures how strongly a model's output depends on visual evidence. VAUQ introduces the Image-Information Score (IS), which captures the reduction in predictive uncertainty attributable to visual input, and an unsupervised core-region masking strategy that amplifies the influence of salient regions. Combining predictive entropy with this core-masked IS yields a training-free scoring function that reliably reflects answer correctness. Comprehensive experiments show that VAUQ consistently outperforms existing self-evaluation methods across multiple datasets.

Tags

LVLM Self-Evaluation Uncertainty Quantification Vision-Language Hallucination

arXiv Categories

cs.CV cs.AI cs.CL