AI Agents relevance: 9/10

CARE: Privacy-Compliant Agentic Reasoning with Evidence Discordance

Haochen Liu, Weien Li, Rui Song, Zeyu Li, Chun Jason Xue, Xiao-Yang Liu, Sam Nallaperuma, Xue Liu, Ye Yuan
arXiv: 2604.01113v1 Published: 2026-04-01 Updated: 2026-04-01

AI Summary

To address the problem of discordant evidence in healthcare settings, the paper proposes CARE, a privacy-preserving multi-stage agentic reasoning framework.

Main Contributions

  • Introduces MIMIC-DOS, a dataset for studying prediction under discordant evidence
  • Proposes the CARE framework, which achieves both privacy preservation and performance gains through remote-LLM guidance and local-LLM decision-making
  • Shows experimentally that CARE outperforms existing methods at handling conflicting evidence

Methodology

Proposes a multi-stage agentic reasoning framework in which a remote LLM generates structured categories and transitions, while a local LLM performs evidence acquisition and final decision-making.
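The privacy split described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: `Guidance`, `remote_guide`, and `local_decide` are hypothetical names, and the LLM calls are replaced by stand-in functions. The key invariant is that the remote side receives only a generic task description, never patient data, while the local side combines the remote guidance with the sensitive evidence.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Guidance:
    categories: list[str]                # evidence categories proposed remotely
    transitions: list[tuple[str, str]]   # ordering of reasoning stages

def remote_guide(task_description: str) -> Guidance:
    # Stand-in for the remote LLM call: it sees only the task description,
    # so no patient-specific data ever leaves the local environment.
    return Guidance(
        categories=["signs", "symptoms", "labs"],
        transitions=[("signs", "symptoms"), ("symptoms", "labs")],
    )

def local_decide(guidance: Guidance,
                 patient_evidence: dict[str, str],
                 local_llm: Callable[[str], str]) -> str:
    # Acquire evidence category by category, following the remote guidance,
    # then hand the assembled (possibly discordant) evidence to the local
    # model for the final decision.
    acquired = {c: patient_evidence.get(c, "missing") for c in guidance.categories}
    prompt = "\n".join(f"{c}: {v}" for c, v in acquired.items())
    return local_llm(prompt)  # only the local model ever sees patient data

# Usage with a trivial stand-in for the local model:
decision = local_decide(
    remote_guide("predict short-horizon organ dysfunction worsening"),
    {"signs": "rising creatinine", "symptoms": "patient reports feeling fine"},
    local_llm=lambda p: "worsening" if "rising" in p else "stable",
)
# decision == "worsening"
```

Note how the example deliberately pairs a concerning sign with a reassuring symptom, mirroring the sign–symptom discordance the MIMIC-DOS dataset is built around.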

Original Abstract

Large language model (LLM) systems are increasingly used to support high-stakes decision-making, but they typically perform worse when the available evidence is internally inconsistent. Such a scenario exists in real-world healthcare settings, with patient-reported symptoms contradicting medical signs. To study this problem, we introduce MIMIC-DOS, a dataset for short-horizon organ dysfunction worsening prediction in the intensive care unit (ICU) setting. We derive this dataset from the widely recognized MIMIC-IV, a publicly available electronic health record dataset, and construct it exclusively from cases in which discordance between signs and symptoms exists. This setting poses a substantial challenge for existing LLM-based approaches, with single-pass LLMs and agentic pipelines often struggling to reconcile such conflicting signals. To address this problem, we propose CARE: a multi-stage privacy-compliant agentic reasoning framework in which a remote LLM provides guidance by generating structured categories and transitions without accessing sensitive patient data, while a local LLM uses these categories and transitions to support evidence acquisition and final decision-making. Empirically, CARE achieves stronger performance across all key metrics compared to multiple baseline settings, showing that CARE can more robustly handle conflicting clinical evidence while preserving privacy.

Tags

LLM Agent Privacy Healthcare Reasoning

arXiv Categories

cs.CL