CompactRAG: Reducing LLM Calls and Token Overhead in Multi-Hop Question Answering
AI Summary
CompactRAG significantly reduces LLM calls and token consumption in multi-hop question answering through offline knowledge-base construction and efficient online inference.
Main Contributions
- Proposes the CompactRAG framework, which decouples offline knowledge-base construction from online reasoning
- Builds an atomic QA knowledge base to reduce the number of LLM reasoning steps
- Improves accuracy and efficiency through entity-consistency preservation and RoBERTa-based answer extraction
Methodology
In the offline stage, an LLM constructs an atomic QA knowledge base from the corpus. In the online stage, a complex question is decomposed into sub-questions and rewritten for entity consistency, and each sub-question is resolved via retrieval and answer extraction, so the LLM is called only twice in total.
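The online stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy knowledge base, the `#1` placeholder convention, the keyword-overlap retriever, and the stub LLM functions are all assumptions standing in for the real atomic QA corpus, dense retrieval, RoBERTa extraction, and actual LLM calls.

```python
# Offline-stage output (assumed toy form): atomic, fine-grained QA pairs.
ATOMIC_KB = [
    {"q": "who directed the film inception", "a": "Christopher Nolan"},
    {"q": "where was christopher nolan born", "a": "London"},
]

def llm_decompose(question: str) -> list[str]:
    # LLM call #1 (stubbed): split a multi-hop question into sub-questions,
    # with "#1" marking where the previous hop's answer must be substituted.
    return ["who directed the film inception", "where was #1 born"]

def retrieve(query: str) -> dict:
    # Stand-in for dense retrieval: pick the KB entry with the largest
    # word overlap (the real system uses embedding similarity).
    def overlap(entry):
        return len(set(query.lower().split()) & set(entry["q"].split()))
    return max(ATOMIC_KB, key=overlap)

def extract_answer(query: str, entry: dict) -> str:
    # Stand-in for RoBERTa-based extraction: here the atomic KB already
    # stores the answer span, so we return it directly.
    return entry["a"]

def llm_synthesize(question: str, hop_answers: list[str]) -> str:
    # LLM call #2 (stubbed): compose the final answer from hop answers.
    return hop_answers[-1]

def answer(question: str) -> str:
    subs = llm_decompose(question)              # LLM call 1
    hop_answers = []
    for sub in subs:
        # Rewrite: substitute earlier answers to keep entities consistent.
        for i, prev in enumerate(hop_answers, 1):
            sub = sub.replace(f"#{i}", prev)
        entry = retrieve(sub)                   # retrieval, no LLM call
        hop_answers.append(extract_answer(sub, entry))  # extraction, no LLM
    return llm_synthesize(question, hop_answers)  # LLM call 2

print(answer("Where was the director of Inception born?"))  # → London
```

Note that however many hops the loop runs, only the decomposition and synthesis steps touch the LLM, which is the source of the framework's token savings.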
Original Abstract
Retrieval-augmented generation (RAG) has become a key paradigm for knowledge-intensive question answering. However, existing multi-hop RAG systems remain inefficient, as they alternate between retrieval and reasoning at each step, resulting in repeated LLM calls, high token consumption, and unstable entity grounding across hops. We propose CompactRAG, a simple yet effective framework that decouples offline corpus restructuring from online reasoning. In the offline stage, an LLM reads the corpus once and converts it into an atomic QA knowledge base, which represents knowledge as minimal, fine-grained question-answer pairs. In the online stage, complex queries are decomposed and carefully rewritten to preserve entity consistency, and are resolved through dense retrieval followed by RoBERTa-based answer extraction. Notably, during inference, the LLM is invoked only twice in total - once for sub-question decomposition and once for final answer synthesis - regardless of the number of reasoning hops. Experiments on HotpotQA, 2WikiMultiHopQA, and MuSiQue demonstrate that CompactRAG achieves competitive accuracy while substantially reducing token consumption compared to iterative RAG baselines, highlighting a cost-efficient and practical approach to multi-hop reasoning over large knowledge corpora. The implementation is available at GitHub.