Uncertainty-Aware Variational Reward Factorization via Probabilistic Preference Bases for LLM Personalization
AI Summary
VRF improves LLM personalization through probabilistic preference factorization and uncertainty awareness.
Key Contributions
- Proposes VRF, an uncertainty-aware variational reward factorization framework
- Learns each user's preference distribution using shared probabilistic preference bases
- Downweights uncertain estimates through a variance-attenuated loss
Methodology
A variational encoder infers a distribution over each user's preferences; user weights are derived by Wasserstein distance matching against shared probabilistic bases, and training is optimized with a variance-attenuated loss (a minimal sketch follows).
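To make the pipeline concrete, here is a minimal PyTorch sketch, assuming diagonal Gaussians (for which the squared 2-Wasserstein distance has a closed form) and a softmax-over-negative-distance rule for turning distances into basis weights. The class and function names, the mean-pooling over a user's examples, and the temperature are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class VariationalUserEncoder(nn.Module):
    """Maps a user's few preference examples to a diagonal-Gaussian
    posterior (mean, log-variance) over the shared preference space."""
    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu_head = nn.Linear(128, latent_dim)
        self.logvar_head = nn.Linear(128, latent_dim)

    def forward(self, x: torch.Tensor):
        # x: (num_examples, input_dim); pool examples, emit posterior params.
        h = self.backbone(x).mean(dim=0)
        return self.mu_head(h), self.logvar_head(h)

def w2_diag_gaussian(mu1, logvar1, mu2, logvar2):
    """Closed-form squared 2-Wasserstein distance between diagonal
    Gaussians: W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2."""
    s1, s2 = torch.exp(0.5 * logvar1), torch.exp(0.5 * logvar2)
    return ((mu1 - mu2) ** 2).sum(-1) + ((s1 - s2) ** 2).sum(-1)

def basis_weights(user_mu, user_logvar, basis_mu, basis_logvar, temperature=1.0):
    """Soft weights over K probabilistic bases: a basis distribution
    closer to the user posterior receives a larger mixing weight."""
    d2 = w2_diag_gaussian(user_mu.unsqueeze(0), user_logvar.unsqueeze(0),
                          basis_mu, basis_logvar)       # (K,)
    return torch.softmax(-d2 / temperature, dim=-1)     # (K,)
```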
Original Abstract
Reward factorization personalizes large language models (LLMs) by decomposing rewards into shared basis functions and user-specific weights. Yet, existing methods estimate user weights from scarce data in isolation and as deterministic points, leading to inaccurate and unreliable inference. We introduce Variational Reward Factorization (VRF), an uncertainty-aware framework that represents each user's preferences as a variational distribution in a shared preference space. VRF infers user distributions via a variational encoder, derives weights through Wasserstein distance matching with shared probabilistic bases, and downweights uncertain estimates through a variance-attenuated loss. On three benchmarks, VRF outperforms all baselines across seen and unseen users, few-shot scenarios, and varying uncertainty levels, with gains extending to downstream alignment.
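The abstract does not spell out the variance-attenuated loss. A common heteroscedastic form that matches its description (downweighting uncertain weight estimates) is sketched below; the Bradley-Terry pairing and the exact attenuation rule are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def variance_attenuated_loss(r_chosen, r_rejected, user_logvar):
    """Bradley-Terry preference loss, attenuated for uncertain users:
    dividing by the posterior variance shrinks the gradient from noisy
    estimates; the log-variance term discourages inflating variance."""
    var = user_logvar.exp().mean(dim=-1)        # scalar per-user uncertainty
    nll = F.softplus(-(r_chosen - r_rejected))  # -log sigmoid(r_c - r_r)
    return (nll / var + var.log()).mean()
```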