LLM Memory & RAG relevance: 9/10

Neuro-RIT: Neuron-Guided Instruction Tuning for Robust Retrieval-Augmented Language Model

Jaemin Kim, Jae O Lee, Sumyeong Ahn, Seo Yeon Park
arXiv: 2604.02194v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Neuro-RIT uses neuron-guided instruction tuning to improve the robustness of retrieval-augmented language models in noisy retrieval settings.

Main Contributions

  • Proposes the Neuro-RIT framework, which improves robustness at the level of individual neurons
  • Attribution-based neuron mining that disentangles neurons responsible for processing relevant versus irrelevant contexts
  • A two-stage instruction tuning strategy that suppresses noise and distills evidence

Methodology

Key neurons are first identified via attribution-based mining; staged instruction tuning then performs noise suppression and evidence distillation in turn, improving the model's robustness.
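The attribution-based mining step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a common first-order attribution score (activation × gradient) per FFN neuron, and defines "exclusive" neurons as those highly attributed under irrelevant contexts but not under relevant ones. All function names and the top-k selection rule are assumptions for illustration.

```python
import numpy as np

def neuron_attribution(activations, gradients):
    """Per-neuron attribution score: mean |activation * gradient| over
    a batch of examples (a first-order saliency approximation; the
    paper's exact attribution method may differ)."""
    return np.abs(activations * gradients).mean(axis=0)

def mine_exclusive_neurons(attr_relevant, attr_irrelevant, top_k):
    """Return neurons in the top-k by attribution under irrelevant
    contexts but NOT in the top-k under relevant contexts -- these are
    candidates for functional deactivation."""
    top_irr = set(np.argsort(attr_irrelevant)[-top_k:])
    top_rel = set(np.argsort(attr_relevant)[-top_k:])
    return sorted(top_irr - top_rel)

# Toy example: neuron 0 fires for relevant contexts, neuron 1 for
# irrelevant ones, so neuron 1 is flagged as irrelevant-exclusive.
attr_rel = neuron_attribution(np.array([[1.0, 0.1, 0.0]]),
                              np.array([[1.0, 0.1, 0.0]]))
attr_irr = neuron_attribution(np.array([[0.1, 1.0, 0.0]]),
                              np.array([[0.1, 1.0, 0.0]]))
exclusive = mine_exclusive_neurons(attr_rel, attr_irr, top_k=1)
```

Running this flags neuron index 1 as exclusive to irrelevant contexts.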

Original Abstract

Retrieval-Augmented Language Models (RALMs) have demonstrated significant potential in knowledge-intensive tasks; however, they remain vulnerable to performance degradation when presented with irrelevant or noisy retrieved contexts. Existing approaches to enhance robustness typically operate via coarse-grained parameter updates at the layer or module level, often overlooking the inherent neuron-level sparsity of Large Language Models (LLMs). To address this limitation, we propose Neuro-RIT (Neuron-guided Robust Instruction Tuning), a novel framework that shifts the paradigm from dense adaptation to precision-driven neuron alignment. Our method explicitly disentangles neurons that are responsible for processing relevant versus irrelevant contexts using attribution-based neuron mining. Subsequently, we introduce a two-stage instruction tuning strategy that enforces a dual capability for noise robustness: achieving direct noise suppression by functionally deactivating neurons exclusive to irrelevant contexts, while simultaneously optimizing targeted layers for evidence distillation. Extensive experiments across diverse QA benchmarks demonstrate that Neuro-RIT consistently outperforms strong baselines and robustness-enhancing methods.
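The "direct noise suppression" step in the abstract — functionally deactivating neurons exclusive to irrelevant contexts — can be pictured as a forward-pass mask over FFN activations. The sketch below is a hypothetical numpy illustration of that idea only; the function name and masking mechanism are assumptions, and in practice this would operate inside the model's transformer layers.

```python
import numpy as np

def suppress_noise_neurons(ffn_activations, noise_neuron_ids):
    """Zero out the activations of neurons previously identified as
    exclusive to irrelevant contexts; all other neurons pass through
    unchanged. (Illustrative sketch, not the paper's implementation.)"""
    mask = np.ones(ffn_activations.shape[-1])
    mask[noise_neuron_ids] = 0.0
    return ffn_activations * mask

# Toy example: suppress neuron 1 in a batch of one hidden vector.
hidden = np.array([[1.0, 2.0, 3.0]])
out = suppress_noise_neurons(hidden, [1])
```

Here `out` is `[[1.0, 0.0, 3.0]]`: the flagged neuron is silenced while the rest of the representation is preserved, which is what lets the second tuning stage focus the remaining capacity on evidence distillation.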

Tags

Retrieval-Augmented Language Model, Instruction Tuning, Neuron-level Sparsity, Robustness

arXiv Categories

cs.CL cs.AI