HalDec-Bench: Benchmarking Hallucination Detector in Image Captioning
AI Summary
The paper proposes HalDec-Bench, a benchmark for evaluating the performance of hallucination detectors in image captioning, covering a diverse set of captioning models and hallucination types.
Key Contributions
- Constructs the HalDec-Bench benchmark for evaluating hallucination detectors.
- Provides fine-grained annotations for different hallucination types.
- Presents an analysis that reveals biases in existing detectors and noise problems in training datasets.
Methodology
The authors build a dataset of captions generated by multiple VLMs, manually annotate hallucinations, and add hallucination-type categories and segment-level labels; hallucination detectors are then evaluated against these annotations.
Original Abstract
Hallucination detection in captions (HalDec) assesses a vision-language model's ability to correctly align image content with text by identifying errors in captions that misrepresent the image. Beyond evaluation, effective hallucination detection is also essential for curating high-quality image-caption pairs used to train VLMs. However, the generalizability of VLMs as hallucination detectors across different captioning models and hallucination types remains unclear due to the lack of a comprehensive benchmark. In this work, we introduce HalDec-Bench, a benchmark designed to evaluate hallucination detectors in a principled and interpretable manner. HalDec-Bench contains captions generated by diverse VLMs together with human annotations indicating the presence of hallucinations, detailed hallucination-type categories, and segment-level labels. The benchmark provides tasks with a wide range of difficulty levels and reveals performance differences across models that are not visible in existing multimodal reasoning or alignment benchmarks. Our analysis further uncovers two key findings. First, detectors tend to recognize sentences appearing at the beginning of a response as correct, regardless of their actual correctness. Second, our experiments suggest that dataset noise can be substantially reduced by using strong VLMs as filters while employing recent VLMs as caption generators. Our project page is available at https://dahlian00.github.io/HalDec-Bench-Page/.