Multimodal Learning relevance: 9/10

ViX-Ray: A Vietnamese Chest X-Ray Dataset for Vision-Language Models

Duy Vu Minh Nguyen, Chinh Thanh Truong, Phuc Hoang Tran, Hung Tuan Le, Nguyen Van-Thanh Dat, Trung Hieu Pham, Kiet Van Nguyen
arXiv: 2603.15513v1 Published: 2026-03-16 Updated: 2026-03-16

AI Summary

The paper releases ViX-Ray, a dataset of 5,400 Vietnamese chest X-ray images, for evaluating and improving VLM performance in the Vietnamese clinical domain.

Key Contributions

  • Created ViX-Ray, a Vietnamese chest X-ray dataset
  • Analyzed the linguistic patterns in the dataset
  • Fine-tuned and evaluated VLMs on ViX-Ray

Methodology

Vietnamese chest X-rays were collected and annotated by expert physicians; the linguistic features of the reports were analyzed; open-source VLMs were fine-tuned on the dataset and compared against proprietary VLMs.
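The linguistic-analysis step counts how often body parts and diagnoses are mentioned across reports. A minimal sketch of that kind of frequency count, assuming hypothetical term lists (the paper's actual vocabularies are not given here):

```python
from collections import Counter

# Hypothetical Vietnamese term lists for illustration only.
BODY_PART_TERMS = ["phổi", "tim", "xương sườn", "cơ hoành"]  # lung, heart, ribs, diaphragm
DIAGNOSIS_TERMS = ["viêm phổi", "tràn dịch", "xơ hóa"]       # pneumonia, effusion, fibrosis

def term_frequencies(reports, terms):
    """Count how many reports mention each term (case-insensitive substring match)."""
    counts = Counter()
    for report in reports:
        text = report.lower()
        for term in terms:
            if term in text:
                counts[term] += 1
    return counts

# Two toy reports standing in for the dataset's expert-written findings.
reports = [
    "Phổi hai bên không thấy tổn thương. Bóng tim không to.",
    "Viêm phổi thùy dưới phải, tràn dịch màng phổi lượng ít.",
]
print(term_frequencies(reports, BODY_PART_TERMS))
# → Counter({'phổi': 2, 'tim': 1})
```

Real radiology text would need tokenization and normalization beyond substring matching (e.g. diacritic handling, negation), but the counting logic is the same.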

Original Abstract

Vietnamese medical research has become an increasingly vital domain, particularly with the rise of intelligent technologies aimed at reducing time and resource burdens in clinical diagnosis. Recent advances in vision-language models (VLMs), such as Gemini and GPT-4V, have sparked a growing interest in applying AI to healthcare. However, most existing VLMs lack exposure to Vietnamese medical data, limiting their ability to generate accurate and contextually appropriate diagnostic outputs for Vietnamese patients. To address this challenge, we introduce ViX-Ray, a novel dataset comprising 5,400 Vietnamese chest X-ray images annotated with expert-written findings and impressions from physicians at a major Vietnamese hospital. We analyze linguistic patterns within the dataset, including the frequency of mentioned body parts and diagnoses, to identify domain-specific linguistic characteristics of Vietnamese radiology reports. Furthermore, we fine-tune five state-of-the-art open-source VLMs on ViX-Ray and compare their performance to leading proprietary models, GPT-4V and Gemini. Our results show that while several models generate outputs partially aligned with clinical ground truths, they often suffer from low precision and excessive hallucination, especially in impression generation. These findings not only demonstrate the complexity and challenge of our dataset but also establish ViX-Ray as a valuable benchmark for evaluating and advancing vision-language models in the Vietnamese clinical domain.
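The abstract reports that generated impressions suffer from low precision and hallucination. One plausible way to quantify this (not necessarily the paper's metric) is entity-level precision: the fraction of clinical entities in a generated report that are supported by the ground-truth report, with unsupported entities counted as hallucinations:

```python
def entity_precision(predicted, reference):
    """Fraction of predicted clinical entities that appear in the reference.

    Entities in `predicted` but not in `reference` are hallucinations,
    so low precision means heavy hallucination.
    """
    predicted, reference = set(predicted), set(reference)
    if not predicted:
        return 0.0
    return len(predicted & reference) / len(predicted)

# Toy example: the model hallucinates "u phổi" (lung tumor).
pred = ["viêm phổi", "tràn dịch", "u phổi"]
ref = ["viêm phổi", "tràn dịch"]
print(round(entity_precision(pred, ref), 3))
# → 0.667
```

In practice the entities would be extracted from free text by a Vietnamese medical NER step; this sketch assumes they are already given as lists.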

Tags

Medical, Vietnamese, Chest X-ray, VLM

arXiv Categories

cs.CL