LLM Memory & RAG relevance: 8/10

Understanding the Interplay between LLMs' Utilisation of Parametric and Contextual Knowledge: A keynote at ECIR 2025

Isabelle Augenstein
arXiv: 2603.09654v1  Published: 2026-03-10  Updated: 2026-03-10

AI Summary

This work examines the interplay between an LLM's parametric knowledge and the contextual knowledge it is given, and how to diagnose and resolve knowledge conflicts between the two.

Key Contributions

  • Methods for evaluating the knowledge stored in LLMs
  • Diagnostic tests that reveal knowledge conflicts
  • An analysis of what characterises successfully used contextual knowledge

Methodology

The research proceeds in three steps: evaluating the knowledge present in LLMs, running diagnostic tests to surface knowledge conflicts, and analysing the characteristics of contextual knowledge that models successfully use.

Original Abstract

Language Models (LMs) acquire parametric knowledge from their training process, embedding it within their weights. The increasing scalability of LMs, however, poses significant challenges for understanding a model's inner workings and further for updating or correcting this embedded knowledge without the significant cost of retraining. Moreover, when using these language models for knowledge-intensive language understanding tasks, LMs have to integrate relevant context, mitigating their inherent weaknesses, such as incomplete or outdated knowledge. Nevertheless, studies indicate that LMs often ignore the provided context as it can be in conflict with the pre-existing LM's memory learned during pre-training. Conflicting knowledge can also already be present in the LM's parameters, termed intra-memory conflict. This underscores the importance of understanding the interplay between how a language model uses its parametric knowledge and the retrieved contextual knowledge. In this talk, I will aim to shed light on this important issue by presenting our research on evaluating the knowledge present in LMs, diagnostic tests that can reveal knowledge conflicts, as well as on understanding the characteristics of successfully used contextual knowledge.
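The diagnostic idea described in the abstract can be illustrated with a minimal sketch: pair a closed-book question with a counterfactual context, then classify whether a model's answer followed the provided context or fell back on its parametric memory. All function names and the example fact below are hypothetical illustrations, not the authors' actual test protocol.

```python
def build_conflict_probe(question: str, counterfactual_context: str) -> dict:
    """Build a closed-book prompt and a context-augmented prompt for one question."""
    return {
        "closed_book": question,
        "contextual": f"Context: {counterfactual_context}\n\nQuestion: {question}",
    }

def classify_answer(model_answer: str, parametric_answer: str,
                    counterfactual_answer: str) -> str:
    """Label which knowledge source a model's answer reflects."""
    ans = model_answer.lower()
    if counterfactual_answer.lower() in ans:
        return "contextual"   # model followed the provided context
    if parametric_answer.lower() in ans:
        return "parametric"   # model ignored the context: a knowledge conflict
    return "other"            # abstention or an unrelated answer

# Illustrative usage with a deliberately false context:
probe = build_conflict_probe(
    question="What is the capital of France?",
    counterfactual_context="According to this document, the capital of France is Lyon.",
)
print(classify_answer("The capital is Lyon.", "Paris", "Lyon"))   # → contextual
print(classify_answer("The capital is Paris.", "Paris", "Lyon"))  # → parametric
```

Running such probes over many facts gives a simple behavioural split: how often the model defers to retrieved context versus its pre-training memory when the two disagree.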

Tags

LLM, knowledge conflict, in-context learning, parametric knowledge

arXiv Categories

cs.CL cs.IR