TSHA: A Benchmark for Visual Language Models in Trustworthy Safety Hazard Assessment Scenarios
AI Summary
The paper proposes TSHA, a benchmark for evaluating the capabilities of vision-language models in trustworthy safety hazard assessment, addressing the limitations of existing benchmarks.
Key Contributions
- Constructed the TSHA benchmark, a more realistic dataset drawing on multiple data sources
- Proposed more comprehensive safety assessment tasks and evaluation protocols
- Demonstrated the shortcomings of existing VLMs on safety assessment tasks
Methodology
The TSHA benchmark, comprising 81,809 samples, is built by collecting data from existing datasets, internet images, AIGC images, and newly captured images; a dedicated test set is designed to evaluate model robustness.
Original Abstract
Recent advances in vision-language models (VLMs) have accelerated their application to indoor safety hazard assessment. However, existing benchmarks suffer from three fundamental limitations: (1) heavy reliance on synthetic datasets constructed via simulation software, creating a significant domain gap with real-world environments; (2) oversimplified safety tasks with artificial constraints on hazard and scene types, thereby limiting model generalization; and (3) absence of rigorous evaluation protocols to thoroughly assess model capabilities in complex home safety scenarios. To address these challenges, we introduce TSHA (Trustworthy Safety Hazards Assessment), a comprehensive benchmark comprising 81,809 carefully curated training samples drawn from four complementary sources: existing indoor datasets, internet images, AIGC images, and newly captured images. The benchmark also includes a highly challenging test set of 1,707 samples, comprising not only a carefully selected subset from the training distribution but also newly added videos and panoramic images containing multiple safety hazards, used to evaluate model robustness in complex safety scenarios. Extensive experiments on 23 popular VLMs demonstrate that current VLMs lack robust capabilities for safety hazard assessment. Importantly, models trained on the TSHA training set not only achieve a significant performance improvement of up to +18.3 points on the TSHA test set but also exhibit enhanced generalizability across other benchmarks, underscoring the substantial contribution and importance of the TSHA benchmark.