LLM Memory & RAG Relevance: 9/10

Mitigating LLM Hallucinations through Domain-Grounded Tiered Retrieval

Md. Asraful Haque, Aasar Mehdi, Maaz Mahboob, Tamkeen Fatima
arXiv: 2603.17872v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

Proposes a domain-knowledge-grounded tiered retrieval and verification architecture for mitigating LLM hallucinations.

Key Contributions

  • Proposes a domain-knowledge-guided tiered retrieval and verification framework
  • Implements a self-regulating four-phase pipeline via LangGraph
  • Validates the framework's effectiveness across multiple benchmarks

Methodology

Builds a pipeline comprising Intrinsic Verification, Adaptive Search Routing, CRAG, and Extrinsic Regeneration, filtering out inaccurate information through multi-stage verification.
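The four-phase flow described above can be sketched in plain Python. The paper implements it with LangGraph; here each phase is a stub function, and the confidence threshold, keyword-overlap grader, and helper names are illustrative assumptions, not the authors' implementation:

```python
def intrinsic_verify(draft, confidence, threshold=0.9):
    """Phase I: Intrinsic Verification with Early-Exit logic.

    If the model's self-assessed confidence clears the threshold,
    return the draft answer and skip retrieval to save compute.
    """
    return draft if confidence >= threshold else None

def route_search(query, domain_archives):
    """Phase II: Adaptive Search Routing via a stubbed Domain Detector.

    Pick the subject-specific archive whose domain key appears in the
    query; fall back to a general archive otherwise.
    """
    for domain, archive in domain_archives.items():
        if domain in query.lower():
            return archive
    return domain_archives.get("general", [])

def grade_documents(query, docs):
    """Phase III: Corrective document grading (CRAG-style filter).

    Keep only documents sharing at least one word with the query;
    a real grader would use a learned relevance model instead.
    """
    qwords = set(query.lower().split())
    return [d for d in docs if qwords & set(d.lower().split())]

def answer_with_verification(query, draft, confidence,
                             domain_archives, generate):
    """Phase IV: Extrinsic Regeneration plus claim-level verification.

    Regenerate over graded context; flag the answer when no graded
    document supports it (a stand-in for atomic claim checking).
    """
    early = intrinsic_verify(draft, confidence)
    if early is not None:
        return early
    docs = grade_documents(query, route_search(query, domain_archives))
    regenerated = generate(query, docs)
    return regenerated if docs else "UNVERIFIED: " + regenerated
```

In a LangGraph implementation each function would become a graph node, with conditional edges carrying the early-exit and "no relevant documents" branches; the stubs above only trace the control flow.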

Original Abstract

Large Language Models (LLMs) have achieved unprecedented fluency but remain susceptible to "hallucinations" - the generation of factually incorrect or ungrounded content. This limitation is particularly critical in high-stakes domains where reliability is paramount. We propose a domain-grounded tiered retrieval and verification architecture designed to systematically intercept factual inaccuracies by shifting LLMs from stochastic pattern-matchers to verified truth-seekers. The proposed framework utilizes a four-phase, self-regulating pipeline implemented via LangGraph: (I) Intrinsic Verification with Early-Exit logic to optimize compute, (II) Adaptive Search Routing utilizing a Domain Detector to target subject-specific archives, (III) Corrective Document Grading (CRAG) to filter irrelevant context, and (IV) Extrinsic Regeneration followed by atomic claim-level verification. The system was evaluated across 650 queries from five diverse benchmarks: TimeQA v2, FreshQA v2, HaluEval General, MMLU Global Facts, and TruthfulQA. Empirical results demonstrate that the pipeline consistently outperforms zero-shot baselines across all environments. Win rates peaked at 83.7% in TimeQA v2 and 78.0% in MMLU Global Facts, confirming high efficacy in domains requiring granular temporal and numerical precision. Groundedness scores remained robustly stable between 78.8% and 86.4% across factual-answer rows. While the architecture provides a robust fail-safe for misinformation, a persistent failure mode of "False-Premise Overclaiming" was identified. These findings provide a detailed empirical characterization of multi-stage RAG behavior and suggest that future work should prioritize pre-retrieval "answerability" nodes to further bridge the reliability gap in conversational AI.

Tags

LLM Hallucination · Retrieval-Augmented Generation · Knowledge Grounding

arXiv Categories

cs.CL cs.AI