Learning to Unlearn for Bayesian Personalized Ranking via Influence Function
Graphical Abstract
Abstract
Learning recommender models from vast amounts of behavioral data has become a mainstream paradigm in modern information systems. At the same time, as privacy awareness has grown, increasing attention has been paid to removing sensitive or outlier data from well-trained recommendation models, a task known as recommendation unlearning. However, current unlearning methods primarily rely on fully or partially retraining the entire model. Although retraining achieves strong performance, it inevitably introduces significant efficiency bottlenecks, making it impractical for latency-sensitive streaming services. Recent efforts have explored efficient unlearning for point-wise recommendation tasks, but these approaches overlook the partial-order relationships between items, yielding suboptimal recommendation and unlearning performance. In light of this, we study learning to Unlearn for Bayesian Personalized Ranking (UBPR) via the influence function. BPR relies on a pair-wise ranking loss to model user preferences and item characteristics, which makes unlearning more challenging than in point-wise settings. Specifically, we propose an influence-function-guided unlearning framework tailored to pair-wise ranking models that efficiently serves unlearning requests: it unlearns partial-order relationships while handling negative samples appropriately during the unlearning process. Moreover, we prove that our method can theoretically match the performance of its retraining counterparts. Finally, we conduct extensive experiments to validate the effectiveness and efficiency of our model.
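The two ingredients the abstract names, the BPR pair-wise ranking loss and an influence-function removal step, can be illustrated on a toy matrix-factorization model. This is a minimal sketch, not the paper's implementation: the model size, data triples, and the damped Newton-style leave-one-out update theta_{-z} ≈ theta* + H^{-1} ∇ℓ(z, theta*) are all assumptions standing in for the generic influence-function approximation, and the derivatives are taken numerically for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(theta, triples, shapes, reg=0.01):
    """Negative log-likelihood of the BPR pair-wise ranking criterion.

    theta packs user embeddings U (n_users x d) and item embeddings
    V (n_items x d); each triple (u, i, j) says user u prefers item i
    over item j.
    """
    n_users, n_items, d = shapes
    U = theta[: n_users * d].reshape(n_users, d)
    V = theta[n_users * d:].reshape(n_items, d)
    loss = reg * np.sum(theta ** 2)
    for u, i, j in triples:
        x_uij = U[u] @ (V[i] - V[j])  # pair-wise score difference
        loss += -np.log(sigmoid(x_uij))
    return loss

def num_grad(f, x, eps=1e-5):
    # Central-difference gradient; adequate for this toy-sized problem.
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        g[k] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def num_hess(f, x, eps=1e-4):
    # Hessian as the Jacobian of the numerical gradient, symmetrized.
    eye = np.eye(x.size)
    H = np.column_stack([
        (num_grad(f, x + eps * eye[k]) - num_grad(f, x - eps * eye[k]))
        / (2 * eps)
        for k in range(x.size)
    ])
    return 0.5 * (H + H.T)

# Toy setup (illustrative values): 3 users, 4 items, embedding dim 2.
shapes = (3, 4, 2)
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=3 * 2 + 4 * 2)
triples = [(0, 0, 1), (1, 2, 3), (2, 1, 0), (0, 2, 3)]

# Train to a (near-)stationary point with plain gradient descent.
f_all = lambda th: bpr_loss(th, triples, shapes)
for _ in range(500):
    theta = theta - 0.1 * num_grad(f_all, theta)

# Unlearn the triple z with the standard influence-function leave-one-out
# step: theta_{-z} ≈ theta* + H^{-1} grad ℓ(z, theta*), where H is the
# Hessian of the training objective; a small damping term keeps the
# linear solve well conditioned.
z = triples[0]
f_z = lambda th: bpr_loss(th, [z], shapes, reg=0.0)
H = num_hess(f_all, theta)
delta = np.linalg.solve(H + 1e-4 * np.eye(theta.size),
                        num_grad(f_z, theta))
theta_unlearn = theta + delta
```

The one-shot update avoids retraining on the retained triples: only one gradient of the removed pair and one Hessian solve are needed, which is the efficiency argument the abstract makes against full or partial retraining.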