LLM Reasoning relevance: 8/10

Reasoning-guided Collaborative Filtering with Language Models for Explainable Recommendation

Fahad Anwaar, Adil Mehmood Khan, Muhammad Khalid, Usman Zia, Kezhi Wang
arXiv: 2602.05544v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

Proposes RGCF-XRec, which combines language models with collaborative filtering knowledge to deliver explainable sequential recommendation with improved effectiveness and efficiency.

Key Contributions

  • A reasoning-guided method for augmenting CF knowledge
  • An efficient four-dimension scoring mechanism for filtering noisy reasoning traces
  • A unified representation learning network that encodes collaborative and semantic signals

Methodology

Builds the hybrid framework RGCF-XRec: CF knowledge is augmented through contextual prompting, a four-dimension scoring mechanism filters out noisy reasoning traces, and a unified representation network encodes collaborative and semantic signals, conditioning the LLM for explainable sequential recommendation.
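The scoring step described above can be pictured with a minimal sketch. The paper does not disclose its actual scoring functions or aggregation weights, so the class name, the unweighted mean, and the threshold below are all assumptions for illustration; only the four dimensions (coherence, completeness, relevance, consistency) come from the source.

```python
from dataclasses import dataclass

@dataclass
class TraceScore:
    """Hypothetical per-trace scores along the four dimensions named in the paper."""
    coherence: float      # logical flow of the reasoning trace
    completeness: float   # coverage of the user's interaction history
    relevance: float      # alignment with the candidate recommendation
    consistency: float    # agreement with collaborative signals

    def overall(self) -> float:
        # Unweighted mean (an assumption; the paper may weight dimensions differently).
        return (self.coherence + self.completeness
                + self.relevance + self.consistency) / 4.0

def filter_traces(traces, threshold=0.7):
    """Keep only (text, score) pairs whose mean score clears the threshold."""
    return [(text, s) for text, s in traces if s.overall() >= threshold]

traces = [
    ("User favors trail-running gear; recommend a hydration vest",
     TraceScore(0.9, 0.8, 0.9, 0.85)),
    ("Unrelated speculation about an unpurchased category",
     TraceScore(0.4, 0.3, 0.2, 0.5)),
]
kept = filter_traces(traces)  # only the first, high-quality trace survives
```

High-scoring traces would then be retained as explanations and folded into the structured prompt, while low-scoring ones are discarded as noise.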

Original Abstract

Large Language Models (LLMs) exhibit potential for explainable recommendation systems but overlook collaborative signals, while prevailing methods treat recommendation and explanation as separate tasks, resulting in a memory footprint. We present RGCF-XRec, a hybrid framework that introduces reasoning-guided collaborative filtering (CF) knowledge into a language model to deliver explainable sequential recommendations in a single step. Theoretical grounding and empirical findings reveal that RGCF-XRec offers three key merits over leading CF-aware LLM-based methods: (1) reasoning-guided augmentation of CF knowledge through contextual prompting to discover latent preferences and interpretable reasoning paths; (2) an efficient scoring mechanism based on four dimensions: coherence, completeness, relevance, and consistency to mitigate noisy CF reasoning traces and retain high-quality explanations; (3) a unified representation learning network that encodes collaborative and semantic signals, enabling a structured prompt to condition the LLM for explainable sequential recommendation. RGCF-XRec demonstrates consistent improvements across Amazon datasets, Sports, Toys, and Beauty, comprising 642,503 user-item interactions. It improves HR@10 by 7.38% in Sports and 4.59% in Toys, along with ROUGE-L by 8.02% and 3.49%, respectively. It reduces the cold-warm performance gap, achieving overall gains of 14.5% in cold-start and 11.9% in warm-start scenarios, and enhances zero-shot HR@5 by 18.54% in Beauty and 23.16% in Toys, highlighting effective generalization and robustness. Moreover, RGCF-XRec achieves training efficiency with a lightweight LLaMA 3.2-3B backbone, ensuring scalability for real-world applications.

Tags

Recommender Systems  Explainability  Language Models  Collaborative Filtering

arXiv Categories

cs.AI