Retrieving Counterfactuals Improves Visual In-Context Learning
AI Summary
CIRCLES improves the reasoning ability of vision-language models (VLMs) in visual in-context learning by retrieving counterfactual examples.
Key Contributions
- Proposes the CIRCLES framework, which constructs counterfactual demonstration sets via attribute-guided composed image retrieval
- Improves VLMs' reasoning about causal relationships through counterfactual examples
- Experiments show that CIRCLES performs especially well on small-scale models and under information-scarce conditions
Methodology
Proposes the CIRCLES framework, which uses an attribute-guided composed image retrieval strategy to retrieve counterfactual-style examples and supplies them as demonstrations for in-context learning with VLMs.
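To make the retrieval step concrete, here is a minimal sketch of attribute-guided composed image retrieval for selecting counterfactual-style demonstrations. All names, the additive composition rule, and the toy embeddings are illustrative assumptions (a common composed-retrieval baseline), not the paper's exact implementation.

```python
# Hypothetical sketch: compose an image embedding with a text embedding that
# describes an attribute flip, then retrieve nearest neighbors from a pool.
# The additive composition and cosine ranking are assumptions for illustration.
import numpy as np


def compose(image_emb: np.ndarray, attribute_emb: np.ndarray) -> np.ndarray:
    """Build a query that keeps the image content but targets one changed
    attribute (here: simple vector addition, then L2 normalization)."""
    q = image_emb + attribute_emb
    return q / np.linalg.norm(q)


def retrieve_counterfactuals(query_img: np.ndarray,
                             attr_flip: np.ndarray,
                             pool: np.ndarray,
                             k: int = 4) -> np.ndarray:
    """Return indices of the k pool examples most similar to the composed
    query; these serve as counterfactual-style in-context demonstrations."""
    q = compose(query_img, attr_flip)
    pool_norm = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    scores = pool_norm @ q  # cosine similarity against every candidate
    return np.argsort(-scores)[:k]


# Toy demo with random 16-d embeddings standing in for a CLIP-like space.
rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 16))        # candidate demonstration images
query_img = rng.normal(size=16)          # embedding of the test image
attr_flip = rng.normal(size=16)          # embedding of "same scene, attribute X changed"
idx = retrieve_counterfactuals(query_img, attr_flip, pool, k=4)
print(idx)  # indices of 4 counterfactual-style demonstrations
```

The retrieved examples would then be formatted as image-label demonstration pairs in the VLM prompt, replacing the passive similarity-only retrieval the abstract criticizes.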
Original Abstract
Vision-language models (VLMs) have achieved impressive performance across a wide range of multimodal reasoning tasks, but they often struggle to disentangle fine-grained visual attributes and reason about underlying causal relationships. In-context learning (ICL) offers a promising avenue for VLMs to adapt to new tasks, but its effectiveness critically depends on the selection of demonstration examples. Existing retrieval-augmented approaches typically rely on passive similarity-based retrieval, which tends to select correlated but non-causal examples, amplifying spurious associations and limiting model robustness. We introduce CIRCLES (Composed Image Retrieval for Causal Learning Example Selection), a novel framework that actively constructs demonstration sets by retrieving counterfactual-style examples through targeted, attribute-guided composed image retrieval. By incorporating counterfactual-style examples, CIRCLES enables VLMs to implicitly reason about the causal relations between attributes and outcomes, moving beyond superficial correlations and fostering more robust and grounded reasoning. Comprehensive experiments on four diverse datasets demonstrate that CIRCLES consistently outperforms existing methods across multiple architectures, especially on small-scale models, with pronounced gains under information scarcity. Furthermore, CIRCLES retrieves more diverse and causally informative examples, providing qualitative insights into how models leverage in-context demonstrations for improved reasoning. Our code is available at https://github.com/gzxiong/CIRCLES.