Scale Can't Overcome Pragmatics: The Impact of Reporting Bias on Vision-Language Reasoning
AI Summary
The weak reasoning of VLMs stems from reporting bias in their training data; scaling up cannot fix it, and targeted data annotation is required.
Key Contributions
- Identifies reporting bias in training data as the root cause of VLMs' weak reasoning capabilities
- Shows that scaling data size, model size, and the number of languages does not meaningfully improve VLMs' reasoning capabilities
- Demonstrates that training on data with annotations specifically collected to capture tacit information effectively improves VLMs' reasoning capabilities
Methodology
Combines theoretical analysis with experimental validation: the training data of VLMs is analyzed through the lens of pragmatics, and the models' reasoning capabilities are evaluated on a set of curated benchmarks.
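To make the corpus-analysis idea concrete, here is a minimal sketch, not the authors' actual pipeline, of how one might measure how rarely captions surface each of the four reasoning skills. The `SKILL_MARKERS` lexicons and the `skill_coverage` helper are illustrative assumptions, a crude keyword proxy for what the paper studies with pragmatic theory.

```python
import re
from collections import Counter

# Hypothetical marker lexicons for the four skills the paper examines
# (spatial, temporal, negation, counting); real analysis would go deeper.
SKILL_MARKERS = {
    "spatial": r"\b(left|right|above|below|behind|in front of|between)\b",
    "temporal": r"\b(before|after|while|during|yesterday|tomorrow)\b",
    "negation": r"\b(no|not|never|without|none)\b",
    "counting": r"\b(one|two|three|four|five|\d+)\b",
}

def skill_coverage(captions):
    """Fraction of captions containing at least one marker for each skill."""
    counts = Counter()
    for caption in captions:
        text = caption.lower()
        for skill, pattern in SKILL_MARKERS.items():
            if re.search(pattern, text):
                counts[skill] += 1
    return {skill: counts[skill] / len(captions) for skill in SKILL_MARKERS}

# Toy corpus echoing the paper's example: web captions tend to omit
# tacit detail ("at the game today!") that would supervise reasoning.
corpus = [
    "at the game today!",
    "best day ever with friends",
    "a photo of 37 people standing behind a field",
]
print(skill_coverage(corpus))
# -> only the deliberately explicit third caption hits spatial/counting markers
```

On a web-scale caption corpus, low coverage fractions for these markers would be one symptom of the reporting bias the paper describes: the tacit information simply is not there for training to pick up.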
原文摘要
The lack of reasoning capabilities in Vision-Language Models (VLMs) has remained at the forefront of research discourse. We posit that this behavior stems from a reporting bias in their training data. That is, how people communicate about visual content by default omits tacit information needed to supervise some types of reasoning; e.g., "at the game today!" is a more likely caption than "a photo of 37 people standing behind a field". We investigate the data underlying the popular VLMs OpenCLIP, LLaVA-1.5 and Molmo through the lens of theories from pragmatics, and find that reporting bias results in insufficient representation of four reasoning skills (spatial, temporal, negation, and counting), despite the corpora being of web-scale, and/or synthetically generated. With a set of curated benchmarks, we demonstrate that: (i) VLMs perform poorly on the aforementioned types of reasoning suppressed in the training data by reporting bias; (ii) contrary to popular belief, scaling data size, model size, and to multiple languages does not result in emergence of these skills by default; but, promisingly, (iii) incorporating annotations specifically collected to obtain tacit information is effective. Our findings highlight the need for more intentional training data curation methods, rather than counting on scale for emergence of reasoning capabilities.