LLM Reasoning relevance: 9/10

Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation

Chengkai Wang, Baisong Liu
arXiv: 2603.03080v1 · Published: 2026-03-03 · Updated: 2026-03-03

AI Summary

The paper proposes the PURE framework, which generates more trustworthy recommendation explanations by selecting evidence consistent with user preferences.

Key Contributions

  • Formalizes the problem of preference-inconsistent explanations
  • Proposes the PURE framework, which generates explanations by selecting preference-aligned evidence
  • Introduces an evaluation metric for preference inconsistency

Methodology

The PURE framework adopts a select-then-generate paradigm: it selects multi-hop reasoning paths aligned with the user's preferences, then injects them into an LLM via structure-aware prompts to generate the explanation.
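The selection step of a select-then-generate pipeline could be sketched as greedy scoring over candidate reasoning paths. The path representation, scoring terms (intent alignment, specificity, diversity), and weights below are illustrative assumptions, not PURE's actual formulation:

```python
# Illustrative sketch of select-then-generate evidence selection.
# Path representation, scoring terms, and weights are assumptions
# for illustration; they are not PURE's actual formulation.

def score_path(path, user_prefs, selected):
    """Score a multi-hop reasoning path by intent alignment,
    specificity, and diversity against already-selected paths."""
    feats = set(path["features"])
    # Intent alignment: overlap with the user's preferred features.
    intent = len(feats & user_prefs) / max(len(feats), 1)
    # Specificity: penalize generic, weakly personalized evidence.
    specificity = 1.0 / (1.0 + path["generic_hits"])
    # Diversity: penalize overlap with evidence already chosen.
    chosen = set().union(*(set(p["features"]) for p in selected)) if selected else set()
    diversity = 1.0 - len(feats & chosen) / max(len(feats), 1)
    return 0.5 * intent + 0.25 * specificity + 0.25 * diversity

def select_evidence(paths, user_prefs, k=3):
    """Greedily pick a compact, preference-aligned evidence set."""
    selected = []
    pool = list(paths)
    while pool and len(selected) < k:
        best = max(pool, key=lambda p: score_path(p, user_prefs, selected))
        selected.append(best)
        pool.remove(best)
    return selected
```

The selected paths would then be serialized into a structure-aware prompt (e.g. preserving head-relation-tail order) before generation; that serialization step is omitted here.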

Original Abstract

LLM-based explainable recommenders can produce fluent explanations that are factually correct, yet still justify items using attributes that conflict with a user's historical preferences. Such preference-inconsistent explanations yield logically valid but unconvincing reasoning and are largely missed by standard hallucination or faithfulness metrics. We formalize this failure mode and propose PURE, a preference-aware reasoning framework following a select-then-generate paradigm. Instead of only improving generation, PURE intervenes in evidence selection: it selects a compact set of multi-hop item-centric reasoning paths that are both factually grounded and aligned with user preference structure, guided by user intent, specificity, and diversity to suppress generic, weakly personalized evidence. The selected evidence is then injected into LLM generation via structure-aware prompting that preserves relational constraints. To measure preference inconsistency, we introduce a feature-level, user-centric evaluation metric that reveals misalignment overlooked by factuality-based measures. Experiments on three real-world datasets show that PURE consistently reduces preference-inconsistent explanations and factual hallucinations while maintaining competitive recommendation accuracy, explanation quality, and inference efficiency. These results highlight that trustworthy explanations require not only factual correctness but also justification aligned with user preferences.
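A feature-level, user-centric inconsistency measure could plausibly be computed as the fraction of features an explanation uses to justify an item that the user's history marks as disliked. The function below is an illustrative assumption of such a metric, not the paper's exact formulation:

```python
# Illustrative sketch of a feature-level preference-inconsistency score.
# The feature extraction and the exact ratio are assumptions; the paper's
# metric may be formulated differently.

def preference_inconsistency(explanation_features, disliked_features):
    """Fraction of the explanation's justifying features that conflict
    with the user's historical preferences. 0.0 = fully consistent."""
    feats = set(explanation_features)
    if not feats:
        return 0.0
    conflicting = feats & set(disliked_features)
    return len(conflicting) / len(feats)
```

A factuality-based metric would score an explanation like "recommended because it is spicy" as correct whenever the item really is spicy; a score like this additionally flags it when the user's history shows they avoid spicy food.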

Tags

Recommender Systems, Explainability, LLM, User Preferences, Reasoning

arXiv Category

cs.AI