Retrieval-Augmented Anatomical Guidance for Text-to-CT Generation
AI Summary
Proposes a retrieval-augmented Text-to-CT generation method that uses retrieved anatomical structure information to guide synthesis, improving image quality and clinical consistency.
Main Contributions
- Proposed a retrieval-augmented Text-to-CT generation method
- Uses a 3D vision-language encoder to retrieve semantically related clinical cases (see the retrieval sketch after this list)
- Injects anatomical structure information into the diffusion model via a ControlNet branch
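Concretely, the retrieval step reduces to a nearest-neighbor lookup in the encoder's joint embedding space. The sketch below is illustrative only, not the paper's released code: `encode_report` is a hypothetical stand-in for the text tower of the 3D vision-language encoder, and `case_bank` is an assumed precomputed store of image embeddings paired with their anatomical annotations.

```python
import numpy as np

def retrieve_proxy(report_text, case_bank, encode_report):
    """Return the anatomical annotation of the most similar clinical case.

    case_bank: list of dicts with precomputed fields
        {"embedding": np.ndarray, "annotation": np.ndarray}
    encode_report: hypothetical text encoder mapping a report to a (d,) vector
    """
    query = encode_report(report_text)
    query = query / np.linalg.norm(query)          # unit-normalize for cosine similarity
    sims = [
        float(query @ (c["embedding"] / np.linalg.norm(c["embedding"])))
        for c in case_bank
    ]
    best = int(np.argmax(sims))                    # nearest neighbor in embedding space
    return case_bank[best]["annotation"]           # structural proxy (e.g. an organ mask)
```

Because the proxy comes from a different patient, it provides only coarse anatomical layout; the abstract's ablation on retrieval quality suggests this is exactly the regime where semantic alignment of the retrieved case matters.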
Methodology
A 3D vision-language encoder retrieves a semantically related clinical case for the input radiology report; that case's anatomical annotation is extracted as a structural prior and injected through a ControlNet branch to guide a text-conditioned latent diffusion model in generating the CT volume.
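To make the conditioning path concrete, the following sketch outlines one reverse-diffusion step in which the retrieved annotation (`proxy`) enters through a ControlNet branch whose residual features are added to the denoising U-Net's activations. All module names (`text_encoder`, `controlnet`, `unet`) and the `extra_residuals` argument are hypothetical stand-ins, since the paper's model is volumetric and its code has not yet been released.

```python
import torch

@torch.no_grad()
def denoise_step(unet, controlnet, text_encoder, z_t, t, report_tokens, proxy):
    """One reverse-diffusion step conditioned on text and an anatomical proxy.

    z_t: noisy latent at timestep t; proxy: retrieved anatomical annotation.
    """
    text_emb = text_encoder(report_tokens)                # semantic conditioning
    # The ControlNet branch consumes the structural proxy and returns residual
    # features aligned with the U-Net's intermediate resolutions.
    control_residuals = controlnet(z_t, t, text_emb, proxy)
    # The main U-Net predicts the noise with those residuals added to its
    # activations, combining semantic and coarse anatomical guidance.
    eps = unet(z_t, t, text_emb, extra_residuals=control_residuals)
    return eps
```

In the standard ControlNet design, the branch's output convolutions are zero-initialized, so the anatomical guidance fades in during fine-tuning without disturbing the pretrained text-conditioned diffusion prior; the sketch assumes the same convention here.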
Original Abstract
Text-conditioned generative models for volumetric medical imaging provide semantic control but lack explicit anatomical guidance, often resulting in outputs that are spatially ambiguous or anatomically inconsistent. In contrast, structure-driven methods ensure strong anatomical consistency but typically assume access to ground-truth annotations, which are unavailable when the target image is to be synthesized. We propose a retrieval-augmented approach for Text-to-CT generation that integrates semantic and anatomical information under a realistic inference setting. Given a radiology report, our method retrieves a semantically related clinical case using a 3D vision-language encoder and leverages its associated anatomical annotation as a structural proxy. This proxy is injected into a text-conditioned latent diffusion model via a ControlNet branch, providing coarse anatomical guidance while maintaining semantic flexibility. Experiments on the CT-RATE dataset show that retrieval-augmented generation improves image fidelity and clinical consistency compared to text-only baselines, while additionally enabling explicit spatial controllability, a capability inherently absent in such approaches. Further analysis highlights the importance of retrieval quality, with semantically aligned proxies yielding consistent gains across all evaluation axes. This work introduces a principled and scalable mechanism to bridge semantic conditioning and anatomical plausibility in volumetric medical image synthesis. Code will be released.