Multimodal Learning (relevance: 7/10)

Following the Diagnostic Trace: Visual Cognition-guided Cooperative Network for Chest X-Ray Diagnosis

Shaoxuan Wu, Jingkun Chen, Chong Ma, Cong Shen, Xiao Zhang, Jun Feng
arXiv: 2602.21657v1 Published: 2026-02-25 Updated: 2026-02-25

AI Summary

VCC-Net uses radiologists' visual cognition to guide chest X-ray diagnosis, improving the reliability and interpretability of AI-assisted diagnosis.

Key Contributions

  • Proposes VCC-Net, a visual cognition-guided cooperative diagnostic paradigm
  • Uses eye-tracking or mouse input to capture radiologists' visual search traces and attention patterns
  • Introduces a cognition-graph co-editing module that integrates radiologist visual cognition with model inference

Methodology

VCC-Net captures radiologists' visual cognition during diagnosis, learns hierarchical visual search strategies from it, and builds a cognition graph that fuses radiologist knowledge with the model's own inference.
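The two-stage idea above (gaze-guided spatial attention, then graph fusion over anatomical regions) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names `gaze_guided_attention` and `region_graph_fuse`, the blending weight `alpha`, and the simple channel-mean attention are all assumptions for exposition.

```python
import numpy as np

def gaze_guided_attention(features, gaze_map, alpha=0.5):
    """Blend model spatial attention with a radiologist gaze heatmap.

    features: (H, W, C) feature map; gaze_map: (H, W) non-negative
    fixation density. Names and shapes are illustrative assumptions.
    """
    # Model's own spatial attention: channel mean, softmax-normalized
    # (subtracting the max for numerical stability).
    model_attn = features.mean(axis=-1)
    model_attn = np.exp(model_attn - model_attn.max())
    model_attn /= model_attn.sum()
    # Normalize the gaze heatmap into a distribution over pixels.
    gaze_attn = gaze_map / (gaze_map.sum() + 1e-8)
    # Convex combination: alpha sets how much the model defers to
    # the radiologist's visual cognition.
    attn = alpha * gaze_attn + (1 - alpha) * model_attn
    # Re-weight the feature map by the fused attention.
    return features * attn[..., None], attn

def region_graph_fuse(region_feats, adjacency):
    """One message-passing step over anatomical-region nodes.

    region_feats: (N, C) per-region embeddings; adjacency: (N, N)
    non-negative dependency weights between regions (row-normalized
    here before aggregation).
    """
    deg = adjacency.sum(axis=1, keepdims=True) + 1e-8
    return (adjacency @ region_feats) / deg

# Toy usage: 4x4 feature map with 3 channels and a synthetic gaze peak.
rng = np.random.default_rng(0)
feats = rng.random((4, 4, 3))
gaze = np.zeros((4, 4))
gaze[1, 2] = 1.0  # single fixation point
fused, attn = gaze_guided_attention(feats, gaze, alpha=0.7)
print(round(float(attn.sum()), 6))  # fused attention remains a distribution
```

Because both attention terms are normalized distributions, their convex combination is also (approximately) a distribution, so the fused map can be read directly as "where the collaborative system is looking".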

Original Abstract

Computer-aided diagnosis (CAD) has significantly advanced automated chest X-ray diagnosis but remains isolated from clinical workflows and lacks reliable decision support and interpretability. Human-AI collaboration seeks to enhance the reliability of diagnostic models by integrating the behaviors of controllable radiologists. However, the absence of interactive tools seamlessly embedded within diagnostic routines impedes collaboration, while the semantic gap between radiologists' decision-making patterns and model representations further limits clinical adoption. To overcome these limitations, we propose a visual cognition-guided collaborative network (VCC-Net) to achieve the cooperative diagnostic paradigm. VCC-Net centers on visual cognition (VC) and employs clinically compatible interfaces, such as eye-tracking or the mouse, to capture radiologists' visual search traces and attention patterns during diagnosis. VCC-Net employs VC as a spatial cognition guide, learning hierarchical visual search strategies to localize diagnostically key regions. A cognition-graph co-editing module subsequently integrates radiologist VC with model inference to construct a disease-aware graph. The module captures dependencies among anatomical regions and aligns model representations with VC-driven features, mitigating radiologist bias and facilitating complementary, transparent decision-making. Experiments on the public datasets SIIM-ACR, EGD-CXR, and self-constructed TB-Mouse dataset achieved classification accuracies of 88.40%, 85.05%, and 92.41%, respectively. The attention maps produced by VCC-Net exhibit strong concordance with radiologists' gaze distributions, demonstrating a mutual reinforcement of radiologist and model inference. The code is available at https://github.com/IPMI-NWU/VCC-Net.

Tags

Medical Imaging · Computer-Aided Diagnosis · Visual Cognition · Human-AI Collaboration · Chest X-Ray

arXiv Categories

cs.CV cs.AI