Concept-Enhanced Multimodal RAG: Towards Interpretable and Accurate Radiology Report Generation
AI Summary
CEMRAG fuses clinical concepts with multimodal RAG to improve both the interpretability and the accuracy of radiology report generation.
Main Contributions
- Proposes the Concept-Enhanced Multimodal RAG (CEMRAG) framework
- Decomposes visual representations into interpretable clinical concepts
- Demonstrates that transparent visual concepts can improve diagnostic accuracy
Methodology
CEMRAG decomposes visual representations into clinical concepts and integrates them with multimodal RAG, generating reports from enriched contextual prompts.
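The pipeline above can be sketched in three steps: score an image feature against a bank of clinical concepts, retrieve exemplar reports with similar concept profiles, and assemble an enriched prompt for the language model. The sketch below is illustrative only; the concept names, embedding dimensions, similarity measures, and toy corpus are all assumptions, not CEMRAG's actual components.

```python
import numpy as np

# Hypothetical concept bank: a name and a unit-norm embedding per clinical
# concept (illustrative placeholders, not the paper's concept set).
CONCEPTS = ["cardiomegaly", "pleural effusion", "atelectasis", "no acute finding"]

rng = np.random.default_rng(0)
concept_embs = rng.normal(size=(len(CONCEPTS), 8))
concept_embs /= np.linalg.norm(concept_embs, axis=1, keepdims=True)

def concept_scores(image_feat: np.ndarray) -> np.ndarray:
    """Decompose a visual feature into per-concept similarity scores."""
    feat = image_feat / np.linalg.norm(image_feat)
    return concept_embs @ feat

def retrieve(scores: np.ndarray, corpus, k: int = 2):
    """Return the k reports whose concept profiles best match the query's."""
    ranked = sorted(corpus, key=lambda item: -float(scores @ item[0]))
    return [text for _, text in ranked[:k]]

def build_prompt(scores: np.ndarray, retrieved) -> str:
    """Assemble an enriched prompt: concept evidence plus retrieved exemplars."""
    concept_lines = [f"- {name}: {s:+.2f}" for name, s in zip(CONCEPTS, scores)]
    context = "\n".join(f"Example report: {r}" for r in retrieved)
    return ("Detected concepts:\n" + "\n".join(concept_lines)
            + "\n\n" + context + "\n\nWrite the radiology report:")

# Toy retrieval corpus of (concept profile, report text) pairs.
corpus = [(concept_scores(rng.normal(size=8)), f"Report {i}") for i in range(5)]

image_feat = rng.normal(size=8)
scores = concept_scores(image_feat)
prompt = build_prompt(scores, retrieve(scores, corpus))
print(prompt)
```

Exposing the concept scores in the prompt is what makes the conditioning inspectable: a reader can check which concepts drove retrieval and generation, rather than relying on an opaque visual embedding.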
Original Abstract
Radiology Report Generation (RRG) through Vision-Language Models (VLMs) promises to reduce documentation burden, improve reporting consistency, and accelerate clinical workflows. However, their clinical adoption remains limited by the lack of interpretability and the tendency to hallucinate findings misaligned with imaging evidence. Existing research typically treats interpretability and accuracy as separate objectives: concept-based explainability techniques focus primarily on transparency, while Retrieval-Augmented Generation (RAG) methods target factual grounding through external retrieval. We present Concept-Enhanced Multimodal RAG (CEMRAG), a unified framework that decomposes visual representations into interpretable clinical concepts and integrates them with multimodal RAG. This approach exploits enriched contextual prompts for RRG, improving both interpretability and factual accuracy. Experiments on MIMIC-CXR and IU X-Ray across multiple VLM architectures, training regimes, and retrieval configurations demonstrate consistent improvements over both conventional RAG and concept-only baselines on clinical accuracy metrics and standard NLP measures. These results challenge the assumed trade-off between interpretability and performance, showing that transparent visual concepts can enhance rather than compromise diagnostic accuracy in medical VLMs. Our modular design decomposes interpretability into visual transparency and structured language model conditioning, providing a principled pathway toward clinically trustworthy AI-assisted radiology.