JAMMEval: A Refined Collection of Japanese Benchmarks for Reliable VLM Evaluation
AI Summary
The paper introduces JAMMEval, a refined collection of Japanese benchmarks for VLM evaluation, improving evaluation reliability.
Main Contributions
- Constructs JAMMEval, a high-quality Japanese VQA evaluation benchmark
- Improves data quality and evaluation reliability through human annotation
- Validates the effectiveness of JAMMEval for evaluating VLM capabilities
Methodology
Seven existing Japanese benchmark datasets are systematically refined through two rounds of human annotation, improving both data quality and evaluation reliability.
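A minimal sketch of how such a two-round flow could combine annotator verdicts into a refined split is shown below; the `Instance` fields and flag names are hypothetical illustrations, not the paper's actual annotation schema.

```python
# Hypothetical two-round refinement flow: round 1 flags problematic
# instances (ambiguous question, wrong answer, solvable without the image);
# round 2 verifies that flagged instances were fixed. Field and flag names
# are illustrative only, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class Instance:
    question: str
    answer: str
    round1_flags: list = field(default_factory=list)  # e.g. ["ambiguous", "wrong_answer"]
    round2_verified: bool = False  # second annotation round confirms the fix

def refine(instances):
    """Keep instances that are clean in round 1, or fixed and verified in round 2."""
    refined = []
    for inst in instances:
        if not inst.round1_flags:      # clean in round 1: keep as-is
            refined.append(inst)
        elif inst.round2_verified:     # flagged, then corrected and verified
            refined.append(inst)
        # otherwise drop: unresolved ambiguity or unverifiable answer
    return refined
```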
Original Abstract
Reliable evaluation is essential for the development of vision-language models (VLMs). However, Japanese VQA benchmarks have undergone far less iterative refinement than their English counterparts. As a result, many existing benchmarks contain issues such as ambiguous questions, incorrect answers, and instances that can be solved without visual grounding, undermining evaluation reliability and leading to misleading conclusions in model comparisons. To address these limitations, we introduce JAMMEval, a refined collection of Japanese benchmarks for reliable VLM evaluation. It is constructed by systematically refining seven existing Japanese benchmark datasets through two rounds of human annotation, improving both data quality and evaluation reliability. In our experiments, we evaluate open-weight and proprietary VLMs on JAMMEval and analyze the capabilities of recent models on Japanese VQA. We further demonstrate the effectiveness of our refinement by showing that the resulting benchmarks yield evaluation scores that better reflect model capability, exhibit lower run-to-run variance, and improve the ability to distinguish between models of different capability levels. We release our dataset and code to advance reliable evaluation of VLMs.
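To make the abstract's reliability claims concrete, the sketch below shows one way run-to-run variance and model separation could be quantified from repeated evaluation scores; the scores here are made-up placeholders, not results from the paper.

```python
# Quantifying the two reliability properties named in the abstract:
# (1) run-to-run variance of a model's score, (2) how cleanly the
# benchmark separates two models despite run-to-run noise.
import statistics

def run_variance(scores_per_run):
    """Sample variance of one model's score across repeated evaluation runs."""
    return statistics.variance(scores_per_run)

def separation(model_a_runs, model_b_runs):
    """Gap between mean scores relative to pooled spread;
    larger values mean the two models are easier to distinguish."""
    gap = abs(statistics.mean(model_a_runs) - statistics.mean(model_b_runs))
    pooled_sd = statistics.stdev(model_a_runs + model_b_runs)
    return gap / pooled_sd if pooled_sd > 0 else float("inf")

# Illustrative usage with placeholder scores (three runs per model):
strong = [71.2, 70.8, 71.5]
weak = [63.9, 64.4, 63.5]
print(run_variance(strong))      # lower is better: scores are stable across runs
print(separation(strong, weak))  # higher is better: capability gap stands out
```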