LLM Memory & RAG relevance: 10/10

Optimizing RAG Rerankers with LLM Feedback via Reinforcement Learning

Yuhang Wu, Xiangqing Shen, Fanfan Wang, Cangqi Zhou, Zhen Wu, Xinyu Dai, Rui Xia
arXiv: 2604.02091v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Proposes RRPO, a reinforcement learning framework for reranking preference optimization that improves the generation quality of retrieved results in RAG.

Key Contributions

  • Proposes the RRPO framework, which optimizes the reranker using LLM feedback
  • Requires no human annotations, reducing cost
  • Experiments show it outperforms existing reranking models

Methodology

Reranking is formulated as a sequential decision-making process; reinforcement learning optimizes for context utility based on LLM feedback, and a reference-anchored baseline is introduced for training stability.
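The paper does not spell out the training loop here, but the description above can be sketched as REINFORCE over a Plackett-Luce ranking policy, with the deterministic greedy ranking serving as the reference baseline. Everything below is a toy illustration under stated assumptions: `llm_utility` is a hypothetical stand-in for LLM generation-quality feedback (reward decays with the rank of the one useful document), and `rrpo_step` is one plausible reading of the update, not the authors' actual implementation.

```python
import math
import random

def llm_utility(ranking, gold_doc):
    # Hypothetical stand-in for LLM feedback: reward decays with the
    # rank position of the single useful document.
    return 1.0 / (1.0 + ranking.index(gold_doc))

def sample_ranking(scores, rng):
    # Sample a full ranking from the Plackett-Luce distribution induced
    # by the scores, accumulating the gradient of log P(ranking) w.r.t.
    # each score along the way (sequential decision-making view).
    remaining = list(range(len(scores)))
    ranking = []
    grad = [0.0] * len(scores)
    while remaining:
        weights = [math.exp(scores[i]) for i in remaining]
        total = sum(weights)
        r = rng.random() * total
        acc, chosen = 0.0, remaining[-1]
        for j, w in zip(remaining, weights):
            acc += w
            if r <= acc:
                chosen = j
                break
        # d log P(chosen | remaining) / d score_j = 1[j == chosen] - softmax_j
        for j, w in zip(remaining, weights):
            grad[j] -= w / total
        grad[chosen] += 1.0
        ranking.append(chosen)
        remaining.remove(chosen)
    return ranking, grad

def rrpo_step(scores, gold_doc, lr=0.2, rng=None):
    # One REINFORCE-style update. The baseline is the utility of the
    # deterministic greedy ranking under the current scores -- one simple
    # reading of the "reference-anchored deterministic baseline".
    rng = rng or random.Random(0)
    greedy = sorted(range(len(scores)), key=lambda i: -scores[i])
    baseline = llm_utility(greedy, gold_doc)
    sampled, grad = sample_ranking(scores, rng)
    advantage = llm_utility(sampled, gold_doc) - baseline
    return [s + lr * advantage * g for s, g in zip(scores, grad)]
```

Running many such steps drives the score of the useful document upward, so the greedy ranking eventually places it first; the baseline subtracts the reward the current policy would earn deterministically, so only genuine improvements (or regressions) over the reference produce an update.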

Original Abstract

Rerankers play a pivotal role in refining retrieval results for Retrieval-Augmented Generation. However, current reranking models are typically optimized on static, human-annotated relevance labels in isolation, decoupled from the downstream generation process. This isolation leads to a fundamental misalignment: documents identified as topically relevant by information retrieval metrics often fail to provide the actual utility required by the LLM for precise answer generation. To bridge this gap, we introduce ReRanking Preference Optimization (RRPO), a reinforcement learning framework that directly aligns reranking with the LLM's generation quality. By formulating reranking as a sequential decision-making process, RRPO optimizes for context utility using LLM feedback, thereby eliminating the need for expensive human annotations. To ensure training stability, we further introduce a reference-anchored deterministic baseline. Extensive experiments on knowledge-intensive benchmarks demonstrate that RRPO significantly outperforms strong baselines, including the powerful list-wise reranker RankZephyr. Further analysis highlights the versatility of our framework: it generalizes seamlessly to diverse readers (e.g., GPT-4o), integrates orthogonally with query expansion modules like Query2Doc, and remains robust even when trained with noisy supervisors.

Tags

RAG, reranking, reinforcement learning, LLM feedback, retrieval augmentation

arXiv Categories

cs.CL cs.AI cs.IR