Multimodal Learning relevance: 9/10

Cytoarchitecture in Words: Weakly Supervised Vision-Language Modeling for Human Brain Microscopy

Matthew Sutton, Katrin Amunts, Timo Dickscheid, Christian Schiffer
arXiv: 2602.23088v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

Proposes a weakly supervised vision-language model for describing cytoarchitecture in human brain microscopy images.

Key Contributions

  • Proposes a label-mediated weakly supervised approach to image-text learning
  • Couples an existing cytoarchitectonic vision foundation model (CytoNet) to a large language model
  • Achieves strong performance on brain area description and classification tasks

Methodology

Builds on an existing cytoarchitectonic vision foundation model and a large language model. Images are linked to area descriptions mined from related literature via their area labels, yielding weakly supervised image-text pairs for image-to-text training.
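The label-mediated pairing step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the lookup table `AREA_DESCRIPTIONS`, the example descriptions, and the helper `build_pairs` are all hypothetical stand-ins for the literature-mining and pairing pipeline.

```python
# Hypothetical sketch: pair image patches with literature-mined
# descriptions of their cytoarchitectonic area label, so that no
# curated image-text pairs are required.

# Illustrative label -> mined-description table (contents invented).
AREA_DESCRIPTIONS = {
    "hOc1": "Prominent layer IV with a dense band of granule cells.",
    "44": "Dysgranular cortex with large pyramidal cells in layer III.",
}

def build_pairs(patches):
    """Pair each (patch_id, area_label) with its area's mined description.

    Patches whose label has no mined description are dropped rather than
    manually annotated -- the label is the only link between modalities.
    """
    pairs = []
    for patch_id, label in patches:
        caption = AREA_DESCRIPTIONS.get(label)
        if caption is not None:
            pairs.append((patch_id, caption))
    return pairs

patches = [("p0", "hOc1"), ("p1", "44"), ("p2", "uncharted_area")]
pairs = build_pairs(patches)
print(len(pairs))  # only patches with a known area label are paired
```

The resulting synthetic (image, caption) pairs would then feed an image-to-text training objective coupling the vision encoder to the language model.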

Original Abstract

Foundation models increasingly offer potential to support interactive, agentic workflows that assist researchers during analysis and interpretation of image data. Such workflows often require coupling vision to language to provide a natural-language interface. However, paired image-text data needed to learn this coupling are scarce and difficult to obtain in many research and clinical settings. One such setting is microscopic analysis of cell-body-stained histological human brain sections, which enables the study of cytoarchitecture: cell density and morphology and their laminar and areal organization. Here, we propose a label-mediated method that generates meaningful captions from images by linking images and text only through a label, without requiring curated paired image-text data. Given the label, we automatically mine area descriptions from related literature and use them as synthetic captions reflecting canonical cytoarchitectonic attributes. An existing cytoarchitectonic vision foundation model (CytoNet) is then coupled to a large language model via an image-to-text training objective, enabling microscopy regions to be described in natural language. Across 57 brain areas, the resulting method produces plausible area-level descriptions and supports open-set use through explicit rejection of unseen areas. It matches the cytoarchitectonic reference label for in-scope patches with 90.6% accuracy and, with the area label masked, its descriptions remain discriminative enough to recover the area in an 8-way test with 68.6% accuracy. These results suggest that weak, label-mediated pairing can suffice to connect existing biomedical vision foundation models to language, providing a practical recipe for integrating natural-language in domains where fine-grained paired annotations are scarce.

Tags

Vision-Language Models, Weakly Supervised Learning, Biomedical Image Processing, Cytoarchitecture Analysis

arXiv Categories

cs.CV