Agent Tuning & Optimization Relevance: 8/10

FlexRec: Adapting LLM-based Recommenders for Flexible Needs via Reinforcement Learning

Yijun Pan, Weikang Qiu, Qiyao Ma, Mingxuan Ju, Tong Zhao, Neil Shah, Rex Ying
arXiv: 2603.11901v1 Published: 2026-03-12 Updated: 2026-03-12

AI Summary

FlexRec fine-tunes an LLM with reinforcement learning to handle need-specific ranking in recommender systems with dynamic objectives, substantially improving recommendation quality.

Main Contributions

  • Proposes the FlexRec framework for adapting LLM-based recommendation to diverse needs.
  • Designs an item-level reward mechanism based on counterfactual swaps, providing finer-grained training signals.
  • Introduces critic-based uncertainty-aware scaling to stabilize learning under sparse feedback.
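The counterfactual-swap reward described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: it credits each ranked item with the average DCG improvement it yields over swapping it with items still in the remaining candidate pool.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked relevance list."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def item_level_rewards(ranking, relevance):
    """Hypothetical counterfactual-swap reward: for each position, compare
    the realized DCG against counterfactual rankings where the chosen item
    is swapped with each later (remaining-pool) candidate, and credit the
    item with the average DCG gain it provides over those alternatives."""
    rels = [relevance[item] for item in ranking]
    base = dcg(rels)
    rewards = []
    for pos in range(len(ranking)):
        deltas = []
        for alt in range(pos + 1, len(ranking)):
            swapped = rels[:]
            swapped[pos], swapped[alt] = swapped[alt], swapped[pos]
            deltas.append(base - dcg(swapped))
        rewards.append(sum(deltas) / len(deltas) if deltas else 0.0)
    return rewards
```

Under this sketch, an item placed early that truly belongs there receives positive reward, while an irrelevant item receives zero or negative reward, giving the per-item credit assignment that sequence-level rewards lack.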

Methodology

The LLM is post-trained with reinforcement learning, optimizing recommendation ranking through a carefully designed reward function and an uncertainty-aware mechanism.
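The critic-guided uncertainty-aware scaling can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: a critic is taken to emit a variance estimate per reward, and high-variance (low-confidence) rewards are exponentially down-weighted before the policy update.

```python
import math

def scale_rewards(rewards, critic_variances, temperature=1.0):
    """Hypothetical uncertainty-aware scaling: multiply each reward by
    exp(-variance / temperature), so rewards the critic is unsure about
    contribute less to the gradient, stabilizing learning under sparse,
    noisy feedback."""
    return [r * math.exp(-v / temperature)
            for r, v in zip(rewards, critic_variances)]
```

A confident reward (variance near zero) passes through almost unchanged, while an uncertain one is shrunk toward zero rather than discarded outright.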

Original Abstract

Modern recommender systems must adapt to dynamic, need-specific objectives for diverse recommendation scenarios, yet most traditional recommenders are optimized for a single static target and struggle to reconfigure behavior on demand. Recent advances in reinforcement-learning-based post-training have unlocked strong instruction-following and reasoning capabilities in LLMs, suggesting a principled route for aligning them to complex recommendation goals. Motivated by this, we study closed-set autoregressive ranking, where an LLM generates a permutation over a fixed candidate set conditioned on user context and an explicit need instruction. However, applying RL to this setting faces two key obstacles: (i) sequence-level rewards yield coarse credit assignment that fails to provide fine-grained training signals, and (ii) interaction feedback is sparse and noisy, which together lead to inefficient and unstable updates. We propose FlexRec, a post-training RL framework that addresses both issues with (1) a causally grounded item-level reward based on counterfactual swaps within the remaining candidate pool, and (2) critic-guided, uncertainty-aware scaling that explicitly models reward uncertainty and down-weights low-confidence rewards to stabilize learning under sparse supervision. Across diverse recommendation scenarios and objectives, FlexRec achieves substantial gains: it improves NDCG@5 by up to 59% and Recall@5 by up to 109.4% in need-specific ranking, and further achieves up to 24.1% Recall@5 improvement under generalization settings, outperforming strong traditional recommenders and LLM-based baselines.
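The NDCG@5 and Recall@5 metrics reported in the abstract can be computed as in this minimal binary-relevance sketch (standard definitions, not the paper's evaluation code):

```python
import math

def recall_at_k(ranking, relevant, k=5):
    """Fraction of relevant items that appear in the top-k of the ranking."""
    if not relevant:
        return 0.0
    return sum(1 for item in ranking[:k] if item in relevant) / len(relevant)

def ndcg_at_k(ranking, relevant, k=5):
    """Binary-relevance NDCG@k: DCG of the top-k divided by the ideal DCG
    achievable when all relevant items are ranked first."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranking[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```

For example, a ranking that places the sole relevant item first scores NDCG@5 = 1.0, while pushing it to second position lowers NDCG@5 but leaves Recall@5 unchanged.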

Tags

Reinforcement Learning · Large Language Models · Recommender Systems

arXiv Category

cs.LG