LLM Memory & RAG relevance: 9/10

Adaptive Guidance for Retrieval-Augmented Masked Diffusion Models

Jaemin Kim, Jong Chul Ye
arXiv: 2603.17677v1 · Published: 2026-03-18 · Updated: 2026-03-18

AI Summary

Proposes the ARAM framework, which adaptively adjusts the guidance scale in retrieval-augmented masked diffusion models according to the signal-to-noise ratio, improving performance on knowledge-intensive question answering.

Key Contributions

  • Proposes the ARAM framework to resolve retrieval-prior conflicts in diffusion-model RAG
  • Dynamically adjusts the guidance scale of the retrieved context based on its signal-to-noise ratio
  • Validates ARAM's effectiveness on knowledge-intensive QA tasks

Methodology

ARAM dynamically calibrates the guidance scale during denoising according to the signal-to-noise ratio (SNR) of the distributional shift induced by the retrieved context, suppressing noisy signals and amplifying reliable ones.
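The calibration step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the list-based logits, and the specific signal/noise estimators (mean shift vs. its spread across positions) are all assumptions made for clarity.

```python
import math

def adaptive_guidance_scale(logits_cond, logits_uncond,
                            base_scale=2.0, eps=1e-8):
    """Hypothetical SNR-adaptive guidance scale (illustrative only).

    Treats the mean logit shift induced by the retrieved context as
    signal and its spread across positions as noise: a large, consistent
    shift (high SNR) earns strong guidance, while a scattered shift
    (low SNR) is damped toward zero.
    """
    shift = [c - u for c, u in zip(logits_cond, logits_uncond)]
    mean = sum(shift) / len(shift)
    var = sum((s - mean) ** 2 for s in shift) / len(shift)
    snr = abs(mean) / (math.sqrt(var) + eps)
    # Squash SNR into (0, 1) so the scale stays in (0, base_scale).
    return base_scale * snr / (1.0 + snr)
```

With no shift between conditional and unconditional logits the scale collapses to zero (the context is ignored); a uniform, consistent shift drives it toward `base_scale`.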

Original Abstract

Retrieval-Augmented Generation (RAG) improves factual grounding by incorporating external knowledge into language model generation. However, when retrieved context is noisy, unreliable, or inconsistent with the model's parametric knowledge, it introduces retrieval-prior conflicts that can degrade generation quality. While this problem has been studied in autoregressive language models, it remains largely unexplored in diffusion-based language models, where the iterative denoising process introduces unique challenges for integrating retrieved context. In this work, we propose Adaptive Retrieval-Augmented Masked Diffusion (ARAM), a training-free adaptive guidance framework for Masked Diffusion Models (MDMs) in RAG settings. ARAM dynamically calibrates the guidance scale during denoising according to the Signal-to-Noise Ratio (SNR) of the distributional shift induced by retrieved context. Intuitively, the model strengthens guidance when the retrieved context provides reliable corrective evidence and suppresses it when the contextual signal is noisy or non-supportive. Extensive experiments on multiple knowledge-intensive QA benchmarks show that ARAM improves overall QA performance over competitive RAG baselines.
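The "strengthens guidance / suppresses it" behavior the abstract describes fits the standard classifier-free-guidance mixing rule, where a per-step scale interpolates or extrapolates between the unconditional and context-conditioned predictions. A minimal sketch, assuming list-based logits and an externally computed adaptive scale (both illustrative):

```python
def guided_logits(logits_uncond, logits_cond, scale):
    """One denoising step's guidance mix (hypothetical sketch).

    scale = 0 ignores the retrieved context entirely,
    scale = 1 follows the context-conditioned prediction,
    scale > 1 extrapolates past it (stronger corrective evidence).
    """
    return [u + scale * (c - u)
            for u, c in zip(logits_uncond, logits_cond)]
```

In ARAM's setting the scale would be recomputed at every denoising step from the SNR of the context-induced shift, rather than held fixed as in vanilla classifier-free guidance.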

Tags

RAG · Diffusion Model · Knowledge-intensive QA · Adaptive Guidance

arXiv Categories

cs.CL cs.AI cs.LG