Funding: The National Natural Science Foundation of China (61876001); the Opening Foundation of the State Key Laboratory of Cognitive Intelligence (iED2022-006); the Scientific Research Planning Project of Anhui Province (2022AH050072).
Abstract: Attackers inject crafted adversarial samples into a target recommendation system to achieve illicit goals, seriously compromising the system's security and reliability. Because attackers can rarely obtain detailed knowledge of the target model in practice, generating adversarial samples by gradient optimization on a local surrogate model has become an effective black-box attack strategy. However, such methods suffer from gradients falling into local minima, which limits the transferability of the adversarial samples and thus the attack's effectiveness, and they often ignore the imperceptibility of the generated samples. To address these challenges, we propose a novel attack algorithm, PGMRS-KL, which combines a pre-gradient-guided momentum optimization strategy with fake-user generation constrained by Kullback-Leibler (KL) divergence. Specifically, the algorithm fuses the accumulated gradient direction with the previous step's gradient direction to iteratively update the adversarial samples, and it uses a KL loss to minimize the distributional distance between fake and real user data, achieving both high transferability and imperceptibility. Experimental results demonstrate that our approach outperforms state-of-the-art gradient-based attack algorithms in attack transferability and in generating imperceptible fake user data.
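The abstract names two ingredients but does not spell out the update rule or the loss, so the following PyTorch sketch is only one plausible reading of the idea, not the authors' published algorithm: `pgmrs_kl_attack`, `surrogate_loss`, the momentum coefficient `mu`, the KL weight `lam`, and the blending rule that mixes the accumulated momentum with the previous step's gradient are all illustrative assumptions, and user profiles are assumed to be implicit-feedback interaction matrices.

```python
import torch

def kl_imperceptibility_loss(fake_users, real_users, eps=1e-8):
    """Assumed stand-in for the paper's KL constraint: KL divergence between
    the normalized per-item interaction distributions of fake and real users."""
    p = real_users.mean(dim=0) + eps   # empirical item distribution, real users
    q = fake_users.mean(dim=0) + eps   # same distribution over fake users
    p, q = p / p.sum(), q / q.sum()
    return torch.sum(p * torch.log(p / q))

def pgmrs_kl_attack(surrogate_loss, fake_users, real_users,
                    steps=50, alpha=0.01, mu=0.9, lam=0.1):
    """Hypothetical pre-gradient-guided momentum attack on a local surrogate.

    surrogate_loss: callable mapping a fake-user matrix to the attack
    objective on the surrogate model (lower = more successful attack).
    Each step blends the accumulated momentum with the previous step's
    raw gradient before updating -- an assumed form of the guidance rule.
    """
    g = torch.zeros_like(fake_users)          # accumulated momentum
    prev_grad = torch.zeros_like(fake_users)  # previous step's gradient
    x = fake_users.clone().requires_grad_(True)
    for _ in range(steps):
        # joint objective: attack loss plus KL imperceptibility penalty
        loss = surrogate_loss(x) + lam * kl_imperceptibility_loss(x, real_users)
        grad, = torch.autograd.grad(loss, x)
        grad = grad / (grad.abs().mean() + 1e-12)  # normalize, as in MI-FGSM
        # pre-gradient guidance: momentum mixed with the previous gradient
        g = mu * g + grad + mu * prev_grad
        prev_grad = grad
        with torch.no_grad():
            x -= alpha * g.sign()  # descend on the combined objective
    return x.detach()
```

Under these assumptions, the extra `mu * prev_grad` term is what distinguishes the update from plain momentum (MI-FGSM-style) optimization: the previous step's gradient re-enters the direction estimate, which is one way to read the abstract's claim of escaping local minima and improving transferability.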