LLM Memory & RAG relevance: 9/10

AILS-NTUA at SemEval-2026 Task 8: Evaluating Multi-Turn RAG Conversations

Dimosthenis Athanasiou, Maria Lymperaiou, Giorgos Filandrianos, Athanasios Voulodimos, Giorgos Stamou
arXiv: 2603.10524v1 Published: 2026-03-11 Updated: 2026-03-11

AI Summary

This paper presents a RAG system built on query diversity and a multistage generation pipeline, which achieved strong results at SemEval-2026 Task 8 (1st in Task A, 2nd in Task B).

Main Contributions

  • A query-diversity retrieval strategy
  • A multistage generation pipeline
  • An analysis of the bottlenecks in end-to-end RAG

Methodology

Multiple complementary LLM-based query reformulations are issued to a single corpus-aligned sparse retriever, their rankings are fused via Reciprocal Rank Fusion, and answers are produced through a multistage generation pipeline.
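The fusion step can be illustrated with plain Reciprocal Rank Fusion: each document scores the sum of 1/(k + rank) across the ranked lists returned for the reformulated queries. This is a minimal sketch only; the paper's variance-aware nested variant is not specified here, `k=60` is the conventional default, and the passage IDs are made up.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one list.

    Each document accumulates 1 / (k + rank) for every list it
    appears in; documents are returned in descending fused score.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Rankings from three hypothetical query reformulations of one turn:
fused = reciprocal_rank_fusion([
    ["p3", "p1", "p7"],
    ["p1", "p3", "p9"],
    ["p1", "p7", "p3"],
])
print(fused[:3])  # → ['p1', 'p3', 'p7']
```

Note how `p1`, ranked first by two of the three reformulations, overtakes `p3` even though `p3` tops one list: RRF rewards consistent high placement across query variants.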

Original Abstract

We present the AILS-NTUA system for SemEval-2026 Task 8 (MTRAGEval), addressing all three subtasks of multi-turn retrieval-augmented generation: passage retrieval (A), reference-grounded response generation (B), and end-to-end RAG (C). Our unified architecture is built on two principles: (i) a query-diversity-over-retriever-diversity strategy, where five complementary LLM-based query reformulations are issued to a single corpus-aligned sparse retriever and fused via variance-aware nested Reciprocal Rank Fusion; and (ii) a multistage generation pipeline that decomposes grounded generation into evidence span extraction, dual-candidate drafting, and calibrated multi-judge selection. Our system ranks 1st in Task A (nDCG@5: 0.5776, +20.5% over the strongest baseline) and 2nd in Task B (HM: 0.7698). Empirical analysis shows that query diversity over a well-aligned retriever outperforms heterogeneous retriever ensembling, and that answerability calibration, rather than retrieval coverage, is the primary bottleneck in end-to-end performance.
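The multistage pipeline named in the abstract (evidence span extraction, dual-candidate drafting, calibrated multi-judge selection) could be skeletonized as below. This is a hypothetical sketch, not the authors' implementation: `llm` and the `judges` scorers are assumed callables standing in for prompted models, and the prompts are illustrative.

```python
from statistics import mean

def multistage_generate(question, passages, llm, judges):
    """Hypothetical three-stage grounded generation skeleton.

    llm:    callable prompt -> text (stand-in for a prompted model)
    judges: callables (question, draft, evidence) -> numeric score
    """
    # Stage 1: extract the evidence span from each retrieved passage.
    spans = [llm(f"Extract the span relevant to '{question}':\n{p}")
             for p in passages]
    evidence = "\n".join(spans)

    # Stage 2: draft two candidate answers from the same evidence.
    drafts = [llm(f"Answer '{question}' using only:\n{evidence}")
              for _ in range(2)]

    # Stage 3: every judge scores every draft; keep the best mean score.
    scored = [(mean(j(question, d, evidence) for j in judges), d)
              for d in drafts]
    return max(scored)[1]
```

Decomposing generation this way lets the judge stage also decide answerability, which the abstract identifies as the main end-to-end bottleneck.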

Tags

RAG · Multi-Turn Dialogue · Retrieval-Augmented Generation · LLM · Query Reformulation

arXiv Category

cs.CL