LLM Memory & RAG Relevance: 9/10

Enhancing Large Language Models (LLMs) for Telecom using Dynamic Knowledge Graphs and Explainable Retrieval-Augmented Generation

Dun Yuan, Hao Zhou, Xue Liu, Hao Chen, Yan Xin, Jianzhong Zhang
arXiv: 2602.17529v1 Published: 2026-02-19 Updated: 2026-02-19

AI Summary

The paper proposes KG-RAG, a framework that combines knowledge graphs with retrieval-augmented generation to improve the accuracy and reliability of LLMs in the telecom domain.

Key Contributions

  • Proposes the KG-RAG framework
  • Uses knowledge graphs to augment LLMs with telecom-domain knowledge
  • Experiments show KG-RAG outperforms LLM-only and standard RAG baselines

Methodology

Constructs a telecom-domain knowledge graph and combines it with retrieval-augmented generation, dynamically retrieving relevant knowledge to improve the quality and accuracy of LLM outputs.
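The retrieval step described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the triple schema, the term-overlap scoring, and the prompt format are all assumptions made for clarity.

```python
# Toy telecom knowledge graph as (subject, relation, object) triples.
# The entries below are illustrative, not taken from the paper's KG.
KG = [
    ("5G NR", "uses", "OFDM waveform"),
    ("5G NR", "defined_in", "3GPP TS 38.211"),
    ("gNB", "is_a", "5G base station"),
    ("OFDM waveform", "supports", "flexible numerology"),
]

def retrieve_triples(query, kg, top_k=3):
    """Rank triples by simple term overlap with the query (a stand-in
    for whatever retriever the actual framework uses)."""
    terms = set(query.lower().split())
    def score(triple):
        words = " ".join(triple).lower().split()
        return sum(1 for w in words if w in terms)
    ranked = sorted(kg, key=score, reverse=True)
    return [t for t in ranked[:top_k] if score(t) > 0]

def build_prompt(query, kg):
    """Assemble retrieved facts into a grounded prompt for an LLM."""
    facts = retrieve_triples(query, kg)
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {query}"

print(build_prompt("What waveform does 5G NR use?", KG))
```

Grounding the prompt in retrieved triples is what lets the LLM's answer stay consistent with the standards encoded in the graph; a production system would replace the overlap scorer with entity linking or embedding-based graph retrieval.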

Original Abstract

Large language models (LLMs) have shown strong potential across a variety of tasks, but their application in the telecom field remains challenging due to domain complexity, evolving standards, and specialized terminology. Therefore, general-domain LLMs may struggle to provide accurate and reliable outputs in this context, leading to increased hallucinations and reduced utility in telecom operations. To address these limitations, this work introduces KG-RAG, a novel framework that integrates knowledge graphs (KGs) with retrieval-augmented generation (RAG) to enhance LLMs for telecom-specific tasks. In particular, the KG provides a structured representation of domain knowledge derived from telecom standards and technical documents, while RAG enables dynamic retrieval of relevant facts to ground the model's outputs. Such a combination improves factual accuracy, reduces hallucination, and ensures compliance with telecom specifications. Experimental results across benchmark datasets demonstrate that KG-RAG outperforms both LLM-only and standard RAG baselines, e.g., KG-RAG achieves an average accuracy improvement of 14.3% over RAG and 21.6% over LLM-only models. These results highlight KG-RAG's effectiveness in producing accurate, reliable, and explainable outputs in complex telecom scenarios.

Tags

Knowledge Graphs  Retrieval-Augmented Generation  Telecom  LLM

arXiv Categories

cs.AI