Abstract: Learning users' personalized preferences for items from the user-item rating matrix is crucial for rating-based recommendation. Many recommendation methods, such as latent factor models, cannot fully exploit the interaction information in the rating matrix to learn good personalized preferences, which leads to poor recommendation results. Inspired by the application of the deep-learning Wide and Deep model to app recommendation, this paper proposes a deep hybrid model for rating-based recommendation, named DeepHM. Compared with the Wide and Deep model, DeepHM reconstructs the Wide and Deep components as a DeepWide part and a DNN part, and the two parts share the interaction information as their input. DeepHM can therefore use the user-item interaction information in the rating matrix more effectively to learn personalized preferences. DeepHM treats rating-based recommendation as a classification problem in order to improve recommendation accuracy. Experiments on the public MovieLens dataset show that DeepHM outperforms existing rating-based recommendation models.
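As a hedged illustration of the idea in the abstract, the sketch below treats rating prediction as 5-class classification, with a linear ("wide") path and a small MLP ("deep") path that consume the same shared user-item input. All dimensions, weight shapes, and names here are illustrative assumptions, not the published DeepHM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 100 users, 200 items, 8-dim embeddings, rating classes 1..5.
n_users, n_items, d, n_classes = 100, 200, 8, 5
user_emb = rng.normal(0.0, 0.1, (n_users, d))
item_emb = rng.normal(0.0, 0.1, (n_items, d))

# Both paths consume the SAME shared input: the concatenated embeddings.
w_wide = rng.normal(0.0, 0.1, (n_classes, 2 * d))  # linear (memorization) path
w1 = rng.normal(0.0, 0.1, (16, 2 * d))             # MLP (generalization) path, layer 1
w2 = rng.normal(0.0, 0.1, (n_classes, 16))         # MLP path, layer 2

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_rating_probs(u, i):
    """Distribution over the 5 rating classes for one (user, item) pair."""
    x = np.concatenate([user_emb[u], item_emb[i]])  # shared interaction input
    wide = w_wide @ x                               # linear path
    deep = w2 @ np.maximum(w1 @ x, 0.0)             # ReLU MLP path
    return softmax(wide + deep)                     # joint classification head

probs = predict_rating_probs(3, 17)
predicted_rating = int(np.argmax(probs)) + 1        # class index -> rating 1..5
```

Framing ratings as classes, rather than regressing a single scalar, is what lets such a model output a full distribution over rating values, which is the classification framing the abstract describes.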
Funding: The National Natural Science Foundation of China (61876001); Opening Foundation of State Key Laboratory of Cognitive Intelligence (iED2022-006); Scientific Research Planning Project of Anhui Province (2022AH050072)
Abstract: Attackers inject crafted adversarial samples into a target recommendation system to achieve illegal goals, seriously affecting the security and reliability of the recommendation system. Because it is difficult for attackers to obtain detailed knowledge of the target model in real scenarios, generating adversarial samples via gradient optimization on a local surrogate model has become an effective black-box attack strategy. However, these methods suffer from gradients falling into local minima, which limits the transferability of the adversarial samples and reduces the attack's effectiveness; they also often ignore the imperceptibility of the generated samples. To address these challenges, we propose a novel attack algorithm, PGMRS-KL, that combines a pre-gradient-guided momentum gradient optimization strategy with fake-user generation constrained by Kullback-Leibler (KL) divergence. Specifically, the algorithm combines the accumulated gradient direction with the previous step's gradient direction to iteratively update the adversarial samples, and it uses a KL loss to minimize the distributional distance between fake and real user data, achieving high transferability and imperceptibility of the adversarial samples. Experimental results demonstrate the superiority of our approach over state-of-the-art gradient-based attack algorithms in terms of attack transferability and the generation of imperceptible fake user data.
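A minimal sketch of the two ingredients described above, under assumed forms: a momentum update that blends the accumulated direction with the previous step's raw gradient, and a KL-divergence term measuring how far a fake user's rating distribution sits from the real one. The function names, the exact blending rule, and the toy surrogate objective are illustrative assumptions, not the published PGMRS-KL algorithm.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions (e.g. rating histograms)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def attack_step(x, grad_fn, momentum, prev_grad, mu=0.9, alpha=0.01):
    """One pre-gradient-guided momentum step (assumed form): blend the current
    gradient with the previous step's gradient, then accumulate with momentum."""
    g = grad_fn(x)
    guided = g + prev_grad                                # previous-step guidance
    momentum = mu * momentum + guided / (np.abs(guided).sum() + 1e-12)
    x = x + alpha * np.sign(momentum)                     # signed ascent step
    return x, momentum, g                                 # g becomes next prev_grad

# Toy demo: drive a fake-user vector toward a target profile while a real
# rating distribution is available for the KL term (all values illustrative).
real_dist = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
target = np.array([0.5, -0.3, 0.8, 0.1, -0.6])
grad_fn = lambda x: target - x                            # ascend toward target

x = np.zeros(5)
momentum = np.zeros(5)
prev_grad = np.zeros(5)
for _ in range(200):
    x, momentum, prev_grad = attack_step(x, grad_fn, momentum, prev_grad)
```

In the abstract's framing, the KL term would be folded into the attack objective, so each gradient step trades off attack strength against keeping the fake users' distribution close to that of real users.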