Multimodal Learning (Relevance: 9/10)

Vision-Language Models vs Human: Perceptual Image Quality Assessment

Imran Mehmood, Imad Ali Shah, Ming Ronnier Luo, Brian Deegan
arXiv: 2603.24578v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

This paper evaluates how closely Vision-Language Models align with human perception in image quality assessment and analyzes how their performance varies across different image attributes.

Key Contributions

  • Systematically benchmarks six VLMs against human judgments on image quality assessment
  • Reveals attribute-dependent differences in VLM performance across image quality attributes (contrast, colorfulness)
  • Analyzes how VLMs allocate weight across attributes when evaluating overall preference

Methodology

Human perceptual data were collected through psychophysical experiments and compared against the predictions of six VLMs to evaluate model performance, as sketched below.
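A minimal sketch of this kind of comparison, assuming Spearman's ρ as the alignment metric (the abstract reports ρ values); the scores below are illustrative placeholders, not the paper's data or pipeline:

```python
# Minimal sketch: human-VLM alignment via Spearman rank correlation.
# All scores below are hypothetical placeholders, not from the paper.
from scipy.stats import spearmanr

# Hypothetical human mean opinion scores for a set of stimuli (e.g. colorfulness)
human_scores = [3.1, 4.5, 2.2, 4.9, 3.8, 1.7]
# Hypothetical ratings of the same stimuli from one VLM
vlm_scores = [2.8, 4.7, 2.5, 4.6, 3.2, 1.9]

rho, p_value = spearmanr(human_scores, vlm_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```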

Original Abstract

Psychophysical experiments remain the most reliable approach for perceptual image quality assessment (IQA), yet their cost and limited scalability encourage automated approaches. We investigate whether Vision Language Models (VLMs) can approximate human perceptual judgments across three image quality scales: contrast, colorfulness, and overall preference. Six VLMs (four proprietary and two open-weight models) are benchmarked against psychophysical data. This work presents a systematic benchmark of VLMs for perceptual IQA through comparison with human psychophysical data. The results reveal strong attribute-dependent variability: models with high human alignment for colorfulness (ρ up to 0.93) underperform on contrast, and vice versa. Attribute weighting analysis further shows that most VLMs assign higher weights to colorfulness than to contrast when evaluating overall preference, consistent with the psychophysical data. Intra-model consistency analysis reveals a counterintuitive trade-off: the most self-consistent models are not necessarily the most human-aligned, suggesting that response variability reflects sensitivity to scene-dependent perceptual cues. Furthermore, human-VLM agreement increases with perceptual separability, indicating that VLMs are more reliable when stimulus differences are clearly expressed.
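The abstract does not specify how the attribute weights were estimated; one common approach is least-squares regression of overall preference on the attribute scales. A hypothetical sketch with placeholder data:

```python
# Hypothetical sketch of attribute weighting: regress overall-preference
# scores on contrast and colorfulness scores via least squares.
# Data are illustrative placeholders, not the paper's measurements.
import numpy as np

contrast     = np.array([2.0, 3.5, 4.1, 1.8, 3.0, 4.6])
colorfulness = np.array([3.2, 4.0, 2.5, 1.5, 4.4, 3.9])
preference   = np.array([2.9, 3.9, 3.0, 1.6, 4.1, 4.2])

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(contrast), contrast, colorfulness])
coef, *_ = np.linalg.lstsq(X, preference, rcond=None)
print(f"intercept={coef[0]:.2f}, w_contrast={coef[1]:.2f}, w_colorfulness={coef[2]:.2f}")
```

Under this reading, a larger fitted weight for colorfulness than for contrast would correspond to the pattern the abstract reports for most VLMs.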

Tags

Vision-Language Models · Image Quality Assessment · Psychophysics · Perception · VLM

arXiv Categories

cs.CV eess.IV