Is Conformal Factuality for RAG-based LLMs Robust? Novel Metrics and Systematic Insights
AI Summary
The paper evaluates the reliability and practical usefulness of conformal factuality filtering in RAG, revealing its fragility under distribution shift.
Main Contributions
- Proposes informativeness-aware evaluation metrics that better reflect real task utility
- Reveals the low usefulness of conformal filtering at high factuality levels
- Finds that lightweight entailment-based verifiers outperform LLM-based scorers
Methodology
Through experiments on multiple benchmarks and models, the paper analyzes conformal factuality filtering along five axes: generation, scoring, calibration, robustness, and efficiency.
Original Abstract
Large language models (LLMs) frequently hallucinate, limiting their reliability in knowledge-intensive applications. Retrieval-augmented generation (RAG) and conformal factuality have emerged as potential ways to address this limitation. While RAG aims to ground responses in retrieved evidence, it provides no statistical guarantee that the final output is correct. Conformal factuality filtering offers distribution-free statistical reliability by scoring and filtering atomic claims using a threshold calibrated on held-out data; however, the informativeness of the final output is not guaranteed. We systematically analyze the reliability and usefulness of conformal factuality for RAG-based LLMs across generation, scoring, calibration, robustness, and efficiency. We propose novel informativeness-aware metrics that better reflect task utility under conformal filtering. Across three benchmarks and multiple model families, we find that (i) conformal filtering suffers from low usefulness at high factuality levels due to vacuous outputs, (ii) the conformal factuality guarantee is not robust to distribution shifts and distractors, a limitation that requires calibration data to closely match deployment conditions, and (iii) lightweight entailment-based verifiers match or outperform LLM-based model confidence scorers while requiring over $100\times$ fewer FLOPs. Overall, our results expose factuality-informativeness trade-offs and the fragility of the conformal filtering framework under distribution shifts and distractors, highlighting the need for new approaches that treat robustness and usefulness as key metrics alongside reliability, and provide actionable guidance for building RAG pipelines that are both reliable and computationally efficient.
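The calibration step the abstract describes (score atomic claims with a verifier, then filter with a threshold calibrated on held-out data) follows the standard split-conformal recipe. The sketch below is a minimal illustration, not the paper's implementation; all function names are hypothetical, and it assumes each calibration response is summarized by the highest verifier score given to any of its incorrect claims.

```python
import math

def conformal_threshold(max_false_scores, alpha=0.1):
    """Split-conformal threshold tau: with probability >= 1 - alpha,
    every claim kept (score > tau) from a fresh response is correct.

    max_false_scores[i] is the highest verifier score assigned to any
    *incorrect* claim in calibration response i (0.0 if none are wrong).
    """
    n = len(max_false_scores)
    # Conformal quantile rank ceil((n + 1) * (1 - alpha)), clipped to n.
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    return sorted(max_false_scores)[k - 1]

def filter_claims(claims, scores, tau):
    """Keep only the claims whose verifier score exceeds the threshold."""
    return [c for c, s in zip(claims, scores) if s > tau]
```

This sketch also makes the paper's trade-off visible: shrinking alpha (a stronger factuality guarantee) pushes tau toward the top of the calibration scores, so more claims are filtered out and the surviving output can become vacuous. It likewise shows why the guarantee is not distribution-robust: tau is only valid if deployment scores are exchangeable with the calibration scores.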