Beyond Idealized Patients: Evaluating LLMs under Challenging Patient Behaviors in Medical Consultations
AI Summary
Evaluates how LLMs respond to challenging patient behaviors in medical consultations and proposes a corresponding evaluation benchmark.
Main Contributions
- Defined four categories of challenging patient behaviors
- Constructed CPB-Bench, a bilingual benchmark dataset
- Evaluated a range of LLMs on handling these behaviors
- Studied the effectiveness of intervention strategies
Methodology
Construct a dialogue dataset containing challenging patient behaviors, design corresponding evaluation metrics, evaluate LLM performance, and study intervention strategies.
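The evaluation loop described above can be sketched as follows. This is a minimal illustration, not the paper's released code: the behavior labels follow the four categories defined in the paper, but the keyword-based failure criteria and the sample data are hypothetical stand-ins for the paper's concrete failure criteria and annotated dialogues.

```python
# Hypothetical sketch of a CPB-Bench-style evaluation loop.
# The cue lists below are illustrative assumptions, not the paper's criteria.

BEHAVIORS = [
    "information_contradiction",
    "factual_inaccuracy",
    "self_diagnosis",
    "care_resistance",
]

def is_failure(behavior: str, response: str) -> bool:
    """Toy failure criterion: a safe response should acknowledge or
    address the challenging behavior rather than ignore it."""
    cues = {
        "information_contradiction": ["clarify", "earlier you said"],
        "factual_inaccuracy": ["actually", "not accurate"],
        "self_diagnosis": ["cannot confirm", "see a doctor"],
        "care_resistance": ["understand your concern", "recommend"],
    }
    text = response.lower()
    return not any(cue in text for cue in cues[behavior])

def evaluate(samples):
    """Compute per-behavior failure rates over (behavior, response) pairs."""
    totals = {b: 0 for b in BEHAVIORS}
    failures = {b: 0 for b in BEHAVIORS}
    for behavior, response in samples:
        totals[behavior] += 1
        failures[behavior] += is_failure(behavior, response)
    return {b: failures[b] / totals[b] for b in BEHAVIORS if totals[b]}

# Toy examples: one safe response, one that ignores the contradiction.
sample = [
    ("self_diagnosis", "I cannot confirm that; please see a doctor."),
    ("information_contradiction", "OK, noted."),
]
print(evaluate(sample))
```

Reporting failure rates per behavior, rather than one aggregate score, is what surfaces the behavior-specific failure patterns the paper highlights.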
Original Abstract
Large language models (LLMs) are increasingly used for medical consultation and health information support. In this high-stakes setting, safety depends not only on medical knowledge, but also on how models respond when patient inputs are unclear, inconsistent, or misleading. However, most existing medical LLM evaluations assume idealized and well-posed patient questions, which limits their realism. In this paper, we study challenging patient behaviors that commonly arise in real medical consultations and complicate safe clinical reasoning. We define four clinically grounded categories of such behaviors: information contradiction, factual inaccuracy, self-diagnosis, and care resistance. For each behavior, we specify concrete failure criteria that capture unsafe responses. Building on four existing medical dialogue datasets, we introduce CPB-Bench (Challenging Patient Behaviors Benchmark), a bilingual (English and Chinese) benchmark of 692 multi-turn dialogues annotated with these behaviors. We evaluate a range of open- and closed-source LLMs on their responses to challenging patient utterances. While models perform well overall, we identify consistent, behavior-specific failure patterns, with particular difficulty in handling contradictory or medically implausible patient information. We also study four intervention strategies and find that they yield inconsistent improvements and can introduce unnecessary corrections. We release the dataset and code.