Comprehensive Comparison of RAG Methods Across Multi-Domain Conversational QA
AI Summary
The paper systematically compares multiple RAG methods on multi-turn conversational QA and finds that simple methods often outperform complex ones.
Key Contributions
- Systematically compares the performance of multiple RAG methods on multi-turn conversational QA tasks.
- Reveals performance differences of RAG methods across datasets and identifies the factors behind them.
- Highlights the importance of aligning the retrieval strategy with the dataset structure for RAG performance.
Methodology
Using a unified experimental setup, the paper evaluates the retrieval quality and generation quality of different RAG methods on eight datasets, and analyzes how performance changes as the number of conversation turns grows.
Original Abstract
Conversational question answering increasingly relies on retrieval-augmented generation (RAG) to ground large language models (LLMs) in external knowledge. Yet, most existing studies evaluate RAG methods in isolation and primarily focus on single-turn settings. This paper addresses the lack of a systematic comparison of RAG methods for multi-turn conversational QA, where dialogue history, coreference, and shifting user intent substantially complicate retrieval. We present a comprehensive empirical study of vanilla and advanced RAG methods across eight diverse conversational QA datasets spanning multiple domains. Using a unified experimental setup, we evaluate retrieval quality and answer generation using generator and retrieval metrics, and analyze how performance evolves across conversation turns. Our results show that robust yet straightforward methods, such as reranking, hybrid BM25, and HyDE, consistently outperform vanilla RAG. In contrast, several advanced techniques fail to yield gains and can even degrade performance below the No-RAG baseline. We further demonstrate that dataset characteristics and dialogue length strongly influence retrieval effectiveness, explaining why no single RAG strategy dominates across settings. Overall, our findings indicate that effective conversational RAG depends less on method complexity than on alignment between the retrieval strategy and the dataset structure. We publish the code used (GitHub Repository: https://github.com/Klejda-A/exp-rag.git).
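To make the "hybrid BM25" idea from the abstract concrete, below is a minimal illustrative sketch (not the paper's implementation) of one common way to hybridize a sparse BM25 ranking with a dense-retriever ranking: reciprocal rank fusion (RRF). The document IDs and rankings are hypothetical.

```python
def rrf_fuse(rankings, k=60):
    """Combine ranked lists of doc IDs with reciprocal rank fusion.

    Each document's fused score is the sum of 1 / (k + rank) over the
    input rankings it appears in; k=60 is the commonly used constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top-4 results for one conversational query:
bm25_ranking = ["d3", "d1", "d7", "d2"]   # sparse (lexical) retriever
dense_ranking = ["d3", "d1", "d5", "d2"]  # dense (embedding) retriever

fused = rrf_fuse([bm25_ranking, dense_ranking])
print(fused[:3])
```

Because RRF works on ranks rather than raw scores, it needs no score normalization across the two retrievers, which is one reason simple hybrid schemes like this are robust across datasets.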