Multimodal Learning (Relevance: 9/10)

TRU: Targeted Reverse Update for Efficient Multimodal Recommendation Unlearning

Zhanting Zhou, KaHou Tam, Ziqiang Zheng, Zeyu Ma
arXiv: 2604.02183v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Addressing the difficulty of removing user data from multimodal recommendation systems, the paper proposes the TRU framework, which performs targeted reverse updates to improve unlearning effectiveness.

Key Contributions

  • Observes that the influence of deleted data in multimodal recommendation systems is unevenly distributed.
  • Proposes the targeted reverse update (TRU) framework, comprising three modules: a ranking fusion gate, branch-wise modality scaling, and capacity-aware layer isolation.
  • Demonstrates experimentally that TRU outperforms existing baselines on the retain-forget trade-off and achieves more thorough forgetting.

Methodology

TRU suppresses target-item influence through a ranking fusion gate, preserves retained modality representations via branch-wise modality scaling, and localizes reverse updates to deletion-sensitive modules using capacity-aware layer isolation.
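The three interventions above can be sketched as a toy simulation. Everything here is an illustrative assumption, not the paper's implementation: the gradient-norm heuristic for picking deletion-sensitive layers, the per-branch scaling factors, and the multiplicative score gate are all stand-ins for whatever TRU actually computes.

```python
# Hypothetical sketch of TRU's three targeted interventions.
# All names and formulas below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: per-layer parameters and their gradients on the forget set.
# "layer2" is constructed to carry most of the deleted-data influence.
layers = {f"layer{i}": rng.normal(size=8) for i in range(4)}
forget_grads = {name: rng.normal(size=8) * (2.0 if name == "layer2" else 0.2)
                for name in layers}

# 1) Capacity-aware layer isolation (assumed heuristic): reverse-update only
#    the top-k layers by forget-set gradient norm, leaving the rest intact.
def isolate_and_reverse(layers, grads, lr=0.1, top_k=1):
    norms = {n: np.linalg.norm(g) for n, g in grads.items()}
    sensitive = sorted(norms, key=norms.get, reverse=True)[:top_k]
    # Gradient *ascent* on the forget loss, i.e. a reverse update.
    updated = {n: (p + lr * grads[n] if n in sensitive else p)
               for n, p in layers.items()}
    return updated, sensitive

# 2) Branch-wise modality scaling (assumed form): damp the reverse signal on
#    each modality branch so retained multimodal representations survive.
def scale_branches(grad_by_branch, scales):
    return {b: scales.get(b, 1.0) * g for b, g in grad_by_branch.items()}

# 3) Ranking fusion gate (assumed form): multiplicatively suppress residual
#    target-item influence in the final ranking scores.
def gated_scores(scores, target_items, gate=0.0):
    out = scores.copy()
    out[target_items] *= gate
    return out

updated, sensitive = isolate_and_reverse(layers, forget_grads)
branch_grads = scale_branches({"visual": forget_grads["layer0"],
                               "text": forget_grads["layer1"]},
                              scales={"visual": 0.5})
scores = gated_scores(np.array([0.9, 0.5, 0.8]), target_items=[2])
```

The point of the sketch is the contrast with a uniform reverse update: only the deletion-sensitive layer moves, modality branches receive a damped signal, and target items are gated at ranking time rather than forced out through global parameter reversal.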

Original Abstract

Multimodal recommendation systems (MRS) jointly model user-item interaction graphs and rich item content, but this tight coupling makes user data difficult to remove once learned. Approximate machine unlearning offers an efficient alternative to full retraining, yet existing methods for MRS mainly rely on a largely uniform reverse update across the model. We show that this assumption is fundamentally mismatched to modern MRS: deleted-data influence is not uniformly distributed, but concentrated unevenly across *ranking behavior*, *modality branches*, and *network layers*. This non-uniformity gives rise to three bottlenecks in MRS unlearning: target-item persistence in the collaborative graph, modality imbalance across feature branches, and layer-wise sensitivity in the parameter space. To address this mismatch, we propose **targeted reverse update** (TRU), a plug-and-play unlearning framework for MRS. Instead of applying a blind global reversal, TRU performs three coordinated interventions across the model hierarchy: a ranking fusion gate to suppress residual target-item influence in ranking, branch-wise modality scaling to preserve retained multimodal representations, and capacity-aware layer isolation to localize reverse updates to deletion-sensitive modules. Experiments across two representative backbones, three datasets, and three unlearning regimes show that TRU consistently achieves a better retain-forget trade-off than prior approximate baselines, while security audits further confirm deeper forgetting and behavior closer to a full retraining on the retained data.

Tags

Multimodal Recommendation · Machine Unlearning · Data Privacy · Reverse Update

arXiv Categories

cs.AI