ACT Now: Preempting LVLM Hallucinations via Adaptive Context Integration
AI Summary
ACT reduces LVLM hallucinations through adaptive context integration, improving vision-language alignment.
Main Contributions
- Proposes visual context exploration, which adaptively amplifies the attention heads responsible for visual exploration
- Proposes semantic context aggregation, which effectively aggregates visual evidence to resolve information loss
- Proposes ACT, a training-free inference-time intervention method that reduces LVLM hallucinations
Methodology
ACT strengthens visual focus through visual context exploration and compensates for information loss via semantic context aggregation, achieving better vision-language alignment.
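A minimal sketch of the visual context exploration idea, under my own assumptions rather than the authors' released code: score each attention head by how much attention mass it puts on image tokens, then amplify the outputs of the highest-scoring heads at decoding time. All names here (`head_visual_scores`, `amplify_heads`, `gamma`, `top_k`) are illustrative only.

```python
# Illustrative sketch, not the paper's implementation: identify heads that attend
# strongly to image tokens and scale their outputs during inference.
import torch

def head_visual_scores(attn: torch.Tensor, image_token_mask: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, query_len, key_len) attention weights for one layer.
    image_token_mask: (key_len,) boolean mask marking image tokens.
    Returns one score per head = average attention mass placed on image tokens."""
    mass_on_image = attn[..., image_token_mask].sum(dim=-1)  # (num_heads, query_len)
    return mass_on_image.mean(dim=-1)                        # (num_heads,)

def amplify_heads(head_outputs: torch.Tensor, scores: torch.Tensor,
                  top_k: int = 4, gamma: float = 1.5) -> torch.Tensor:
    """head_outputs: (num_heads, query_len, head_dim) per-head outputs before the
    output projection. Scales the top_k most visually focused heads by gamma."""
    top = torch.topk(scores, k=top_k).indices
    scale = torch.ones_like(scores)
    scale[top] = gamma
    return head_outputs * scale[:, None, None]

if __name__ == "__main__":
    torch.manual_seed(0)
    num_heads, q_len, k_len, head_dim = 8, 5, 20, 16
    attn = torch.softmax(torch.randn(num_heads, q_len, k_len), dim=-1)
    image_token_mask = torch.zeros(k_len, dtype=torch.bool)
    image_token_mask[:10] = True  # assume the first 10 key positions are image tokens
    scores = head_visual_scores(attn, image_token_mask)
    boosted = amplify_heads(torch.randn(num_heads, q_len, head_dim), scores)
    print(scores, boosted.shape)
```

The paper additionally uses spatio-temporal profiling to decide which heads to amplify and by how much; the fixed `top_k`/`gamma` above stand in for that adaptive selection.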
Original Abstract
Large Vision-Language Models (LVLMs) frequently suffer from severe hallucination issues. Existing mitigation strategies predominantly rely on isolated, single-step states to enhance visual focus or suppress strong linguistic priors. However, these static approaches neglect dynamic context changes across the generation process and struggle to correct inherited information loss. To address this limitation, we propose Adaptive Context inTegration (ACT), a training-free inference intervention method that mitigates hallucination through the adaptive integration of contextual information. Specifically, we first propose visual context exploration, which leverages spatio-temporal profiling to adaptively amplify attention heads responsible for visual exploration. To further facilitate vision-language alignment, we propose semantic context aggregation that marginalizes potential semantic queries to effectively aggregate visual evidence, thereby resolving the information loss caused by the discrete nature of token prediction. Extensive experiments across diverse LVLMs demonstrate that ACT significantly reduces hallucinations and achieves competitive results on both discriminative and generative benchmarks, acting as a robust and highly adaptable solution without compromising fundamental generation capabilities.
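One plausible reading of the semantic context aggregation step, as a sketch under my own assumptions rather than the paper's implementation: instead of committing to a single discretely predicted token, the next-step distribution is marginalized over several candidate "semantic query" tokens weighted by their probabilities, so evidence gathered under each candidate contributes to the final prediction. The helper below is hypothetical and only shows the marginalization arithmetic.

```python
# Hypothetical sketch of marginalizing over candidate current-step tokens:
# p(next) = sum_q p(q) * p(next | q) over the top_k candidates q.
import torch

def aggregated_next_distribution(step_logits: torch.Tensor,
                                 candidate_logits: torch.Tensor,
                                 top_k: int = 3) -> torch.Tensor:
    """step_logits: (vocab,) logits for the current step.
    candidate_logits: (vocab, vocab) stand-in for next-step logits obtained by
    re-querying the model with each candidate current-step token appended."""
    probs = torch.softmax(step_logits, dim=-1)
    top_p, top_idx = torch.topk(probs, k=top_k)
    top_p = top_p / top_p.sum()                                    # renormalize over candidates
    next_probs = torch.softmax(candidate_logits[top_idx], dim=-1)  # (top_k, vocab)
    return (top_p[:, None] * next_probs).sum(dim=0)                # (vocab,)

if __name__ == "__main__":
    torch.manual_seed(0)
    vocab = 50
    p_next = aggregated_next_distribution(torch.randn(vocab), torch.randn(vocab, vocab))
    print(p_next.sum())  # ~1.0, i.e. a valid probability distribution
```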