LLM Memory & RAG relevance: 9/10

Mitigating Hallucination in Financial Retrieval-Augmented Generation via Fine-Grained Knowledge Verification

Taoye Yin, Haoyuan Hu, Yaxin Fan, Xinhao Chen, Xinya Wu, Kai Deng, Kezun Zhang, Feng Wang
arXiv: 2602.05723v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

The paper proposes a RAG method based on reinforcement learning with fine-grained knowledge verification to mitigate hallucination in the financial domain.

Key Contributions

  • Proposes the RLFKV framework, which improves RAG-system reliability through fine-grained knowledge verification
  • Introduces an informativeness reward to prevent the model from over-simplifying its answers
  • Constructs the FDD-ANT dataset and validates the method's effectiveness on it

Methodology

Financial responses are decomposed into atomic knowledge units, each unit is checked for correctness against the retrieved documents, and the resulting fine-grained faithful reward is used to optimize the policy via reinforcement learning. An additional informativeness reward prevents reward hacking, improving the faithfulness of the RAG system's outputs.
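The reward construction described above can be sketched as follows. This is a minimal illustration based only on the abstract, not the paper's implementation: `decompose` and `verify_unit` are hypothetical placeholders for the LLM-based decomposer and verifier the paper would use, and the weighting scheme is assumed.

```python
# Hypothetical sketch of the RLFKV reward (faithful + informativeness terms).
# decompose/verify_unit are stand-ins for model-based components.

def decompose(response: str) -> list[str]:
    """Split a response into atomic knowledge units (placeholder: sentence split)."""
    return [s.strip() for s in response.split(".") if s.strip()]

def verify_unit(unit: str, documents: list[str]) -> bool:
    """Check whether a unit is supported by the retrieved documents
    (placeholder: substring match; the paper would use a verifier model)."""
    return any(unit in doc for doc in documents)

def rlfkv_reward(policy_response: str, base_response: str,
                 documents: list[str], alpha: float = 1.0) -> float:
    units = decompose(policy_response)
    if not units:
        return 0.0
    # Fine-grained faithful reward: fraction of units supported by the documents.
    faithful = sum(verify_unit(u, documents) for u in units) / len(units)
    # Informativeness reward: discourage reward hacking via overly concise
    # answers by requiring at least as many units as the base model produced.
    informative = 1.0 if len(units) >= len(decompose(base_response)) else 0.0
    return faithful + alpha * informative
```

Note how the informativeness term acts as a gate rather than a dense signal: a shortened answer forfeits it entirely, so the policy cannot raise its faithfulness score simply by dropping claims.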

Original Abstract

In financial Retrieval-Augmented Generation (RAG) systems, models frequently rely on retrieved documents to generate accurate responses due to the time-sensitive nature of the financial domain. While retrieved documents help address knowledge gaps, model-generated responses still suffer from hallucinations that contradict the retrieved information. To mitigate this inconsistency, we propose a Reinforcement Learning framework enhanced with Fine-grained Knowledge Verification (RLFKV). Our method decomposes financial responses into atomic knowledge units and assesses the correctness of each unit to compute the fine-grained faithful reward. This reward offers more precise optimization signals, thereby improving alignment with the retrieved documents. Additionally, to prevent reward hacking (e.g., overly concise replies), we incorporate an informativeness reward that encourages the policy model to retain at least as many knowledge units as the base model. Experiments conducted on the public Financial Data Description (FDD) task and our newly proposed FDD-ANT dataset demonstrate consistent improvements, confirming the effectiveness of our approach.

Tags

RAG · Knowledge Verification · Reinforcement Learning · Finance

arXiv Categories

cs.AI