LLM Memory & RAG relevance: 9/10

Hypothesis-Conditioned Query Rewriting for Decision-Useful Retrieval

Hangeol Chang, Changsun Lee, Seungjoon Rho, Junho Yeo, Jong Chul Ye
arXiv: 2603.19008v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

Proposes the HCQR framework, which improves RAG performance on decision tasks through hypothesis-guided query rewriting.

Key Contributions

  • Proposes the Hypothesis-Conditioned Query Rewriting (HCQR) framework
  • Designs three targeted query strategies: supporting the hypothesis, distinguishing among options, and verifying clues
  • Experiments show HCQR outperforms conventional RAG methods

Methodology

HCQR first constructs a working hypothesis from the question and the candidate options, then rewrites retrieval into three targeted queries that respectively seek evidence supporting the hypothesis, distinguishing it from competing options, and verifying salient clues in the question; the final decision is made from the retrieved evidence, which can confirm or overturn the initial hypothesis.
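The pipeline above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `llm` and `retrieve` callables, the prompt wording, and `top_k` are all assumptions standing in for whatever model and retriever a real system would use.

```python
def hcqr_answer(question, options, llm, retrieve, top_k=3):
    """Hypothetical sketch of Hypothesis-Conditioned Query Rewriting.

    llm:      callable prompt -> text (stand-in for any chat model)
    retrieve: callable (query, top_k) -> list of context passages
    """
    # Step 1: derive a lightweight working hypothesis from question + options.
    hypothesis = llm(
        f"Question: {question}\nOptions: {options}\n"
        "State the single most likely answer as a short hypothesis."
    )

    # Step 2: rewrite retrieval into the three targeted queries.
    queries = [
        llm(f"Write a search query for evidence SUPPORTING: {hypothesis}"),
        llm(f"Write a search query DISTINGUISHING {hypothesis} "
            f"from the alternatives in {options}"),
        llm(f"Write a search query to VERIFY the salient clues in: {question}"),
    ]

    # Step 3: retrieve evidence for each query and pool the contexts.
    contexts = [doc for q in queries for doc in retrieve(q, top_k)]

    # Step 4: the generator confirms or overturns the hypothesis
    # in light of the pooled evidence.
    return llm(
        f"Question: {question}\nOptions: {options}\n"
        f"Hypothesis: {hypothesis}\nEvidence: {contexts}\n"
        "Answer with the best-supported option."
    )
```

In practice the three queries could also be issued in parallel and their contexts deduplicated before the final decision step.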

Original Abstract

Retrieval-Augmented Generation (RAG) improves Large Language Models (LLMs) by grounding generation in external, non-parametric knowledge. However, when a task requires choosing among competing options, simply grounding generation in broadly relevant context is often insufficient to drive the final decision. Existing RAG methods typically rely on a single initial query, which often favors topical relevance over decision-relevant evidence, and therefore retrieves background information that can fail to discriminate among answer options. To address this issue, here we propose Hypothesis-Conditioned Query Rewriting (HCQR), a training-free pre-retrieval framework that reorients RAG from topic-oriented retrieval to evidence-oriented retrieval. HCQR first derives a lightweight working hypothesis from the input question and candidate options, and then rewrites retrieval into three targeted queries that seek evidence to: (1) support the hypothesis, (2) distinguish it from competing alternatives, and (3) verify salient clues in the question. This approach enables context retrieval that is more directly aligned with answer selection, allowing the generator to confirm or overturn the initial hypothesis based on the retrieved evidence. Experiments on MedQA and MMLU-Med show that HCQR consistently outperforms single-query RAG and re-rank/filter baselines, improving average accuracy over Simple RAG by 5.9 and 3.6 points, respectively. Code is available at https://anonymous.4open.science/r/HCQR-1C2E.

Tags

RAG Query Rewriting Decision Making Retrieval

arXiv Categories

cs.CL cs.AI cs.LG