When Visual Evidence is Ambiguous: Pareidolia as a Diagnostic Probe for Vision Models
AI Summary
Using face pareidolia as a probe, this work analyzes how several families of vision models behave under visual ambiguity, revealing that representational choices, rather than score thresholds, govern model behavior.
Key Contributions
- Proposes a diagnostic framework for analyzing how vision models behave under ambiguous visual evidence.
- Uses face pareidolia as a controlled probe to study detection, localization, uncertainty, and bias across different vision models.
- Reveals distinct mechanisms by which vision-language models, pure vision classifiers, and object detectors handle ambiguity.
Methodology
Construct a dataset of face-pareidolia images, evaluate six models under a unified protocol, and analyze their behavior across difficulty levels and emotion categories.
Original Abstract
When visual evidence is ambiguous, vision models must decide whether to interpret face-like patterns as meaningful. Face pareidolia, the perception of faces in non-face objects, provides a controlled probe of this behavior. We introduce a representation-level diagnostic framework that analyzes detection, localization, uncertainty, and bias across class, difficulty, and emotion in face pareidolia images. Under a unified protocol, we evaluate six models spanning four representational regimes: vision-language models (VLMs; CLIP-B/32, CLIP-L/14, LLaVA-1.5-7B), pure vision classification (ViT), general object detection (YOLOv8), and face detection (RetinaFace). Our analysis reveals three mechanisms of interpretation under ambiguity. VLMs exhibit semantic overactivation, systematically pulling ambiguous non-human regions toward the Human concept, with LLaVA-1.5-7B producing the strongest and most confident over-calls, especially for negative emotions. ViT instead follows an uncertainty-as-abstention strategy, remaining diffuse yet largely unbiased. Detection-based models achieve low bias through conservative priors that suppress pareidolia responses even when localization is controlled. These results show that behavior under ambiguity is governed more by representational choices than score thresholds, and that uncertainty and bias are decoupled: low uncertainty can signal either safe suppression, as in detectors, or extreme over-interpretation, as in VLMs. Pareidolia therefore provides a compact diagnostic and a source of ambiguity-aware hard negatives for probing and improving the semantic robustness of vision-language systems. Code will be released upon publication.
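The decoupling of uncertainty and bias described in the abstract can be made concrete with a small sketch. Assuming per-class probabilities from any of the evaluated models, Shannon entropy serves as an uncertainty measure and the excess mass on the Human class as a bias measure. The metric definitions and the three example distributions below are illustrative assumptions, not the paper's exact formulations:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: high = diffuse/uncertain, low = confident."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def human_bias(probs, human_idx, n_classes):
    """Excess probability mass on the Human class over a uniform prior."""
    return probs[human_idx] - 1.0 / n_classes

# Three illustrative regimes over hypothetical classes [Human, Animal, Object]:
vlm_like      = [0.90, 0.05, 0.05]  # semantic overactivation: low entropy, strong Human pull
vit_like      = [0.34, 0.33, 0.33]  # uncertainty-as-abstention: high entropy, near-zero bias
detector_like = [0.05, 0.05, 0.90]  # conservative suppression: low entropy, negative Human bias

for name, p in [("VLM-like", vlm_like), ("ViT-like", vit_like),
                ("Detector-like", detector_like)]:
    print(f"{name}: entropy={entropy(p):.2f} bits, "
          f"human_bias={human_bias(p, 0, 3):+.2f}")
```

Note that the VLM-like and detector-like rows share low entropy while sitting at opposite ends of the bias axis, which is exactly why low uncertainty alone cannot distinguish safe suppression from over-interpretation.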