LLM Reasoning relevance: 7/10

An interpretable prototype parts-based neural network for medical tabular data

Jacek Karolczak, Jerzy Stefanowski
arXiv: 2603.05423v1 Published: 2026-03-05 Updated: 2026-03-05

AI Summary

Proposes an interpretable prototype parts-based neural network for medical tabular data that balances predictive accuracy and interpretability.

Key Contributions

  • Proposes a prototype parts-based neural network model tailored to medical tabular data.
  • Uses trainable feature patching to learn meaningful prototypical parts from structured data.
  • Model predictions are interpretable at the concept level and can be aligned with clinical language and case-based reasoning.
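Since the model requires discretized diagnostic results, a minimal sketch of that preprocessing step may help. All feature names and reference ranges below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: mapping continuous diagnostic results to discrete
# categories relative to their clinical reference norms.
def discretize(value, low, high):
    """Return the category of a lab value relative to its norm (low, high)."""
    if value < low:
        return "below_norm"
    if value > high:
        return "above_norm"
    return "within_norm"

# Illustrative reference ranges and patient record (not from the paper).
norms = {"glucose_mg_dl": (70, 100), "hemoglobin_g_dl": (12.0, 17.5)}
patient = {"glucose_mg_dl": 126, "hemoglobin_g_dl": 13.1}

discretized = {k: discretize(v, *norms[k]) for k, v in patient.items()}
print(discretized)  # glucose flagged above norm, hemoglobin within norm
```

Such human-readable categories are what lets learned prototype parts be expressed in terms a clinician recognizes (e.g. "glucose above norm").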

Methodology

Diagnostic result norms are discretized; prototypical parts are then learned via trainable feature patching; and the patient's description is compared with the learned prototypes in the network's latent space.
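The prediction step above can be sketched in a simplified form: encode the discretized patient record into a latent space, score its similarity to each learned prototype, and turn those similarities into class logits. This is a hedged illustration in NumPy, not the paper's implementation: the single linear encoder stands in for the trainable feature-patching mechanism, the parameters are random rather than trained, and the log-ratio similarity is the one used in ProtoPNet-style models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 12 discretized patient features, 4 prototype parts,
# an 8-dim latent space, and a binary diagnosis.
n_features, latent_dim, n_prototypes, n_classes = 12, 8, 4, 2

# Stand-ins for trained parameters (random here, for illustration only).
encoder_W = rng.normal(size=(n_features, latent_dim))    # simplified encoder
prototypes = rng.normal(size=(n_prototypes, latent_dim)) # learned prototype parts
class_W = rng.normal(size=(n_prototypes, n_classes))     # similarities -> logits

x = rng.integers(0, 2, size=n_features).astype(float)    # discretized record

z = x @ encoder_W                                        # patient in latent space
dists = np.linalg.norm(prototypes - z, axis=1)           # distance to each prototype
sims = np.log((dists**2 + 1) / (dists**2 + 1e-4))        # ProtoPNet-style similarity
logits = sims @ class_W
pred = int(np.argmax(logits))
```

Because each similarity score is tied to one prototype part, the model can explain a prediction case-based style: "this patient resembles prototype 2 (e.g. glucose above norm, elevated blood pressure), which is associated with the positive class."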

Original Abstract

The ability to interpret machine learning model decisions is critical in such domains as healthcare, where trust in model predictions is as important as their accuracy. Inspired by the development of prototype parts-based deep neural networks in computer vision, we propose a new model for tabular data, specifically tailored to medical records, that requires discretization of diagnostic result norms. Unlike the original vision models that rely on the spatial structure, our method employs trainable patching over features describing a patient, to learn meaningful prototypical parts from structured data. These parts are represented as binary or discretized feature subsets. This allows the model to express prototypes in human-readable terms, enabling alignment with clinical language and case-based reasoning. Our proposed neural network is inherently interpretable and offers interpretable concept-based predictions by comparing the patient's description to learned prototypes in the latent space of the network. In experiments, we demonstrate that the model achieves classification performance competitive to widely used baseline models on medical benchmark datasets, while also offering transparency, bridging the gap between predictive performance and interpretability in clinical decision support.

Tags

Interpretability · Medical tabular data · Prototype learning · Neural networks

arXiv Category

cs.LG