Abstract
The datasets used to train neural network models may contain users' sensitive information, which can leak during training. To bring differential privacy protection to the Adam optimizer, we propose a differentially private Adam optimization algorithm (DP-Adam). It combines the Adam optimizer with differential privacy theory by adding Laplace noise that satisfies differential privacy to Adam's gradient-descent parameter updates during back-propagation, thereby protecting the privacy of the neural network optimization process. Experiments show that, for the same privacy budget, the accuracy of a model trained with DP-Adam exceeds that of DP-SGD as the number of training epochs increases. To reach the same model accuracy, DP-Adam requires a smaller privacy budget; that is, DP-Adam provides a stronger level of privacy protection than DP-SGD.
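The mechanism the abstract describes (clip each gradient to bound its sensitivity, add Laplace noise calibrated to the privacy budget, then apply the usual Adam update) can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `dp_adam_step`, the L1 clipping bound `clip_norm`, and the noise scale `clip_norm / epsilon` are illustrative assumptions about how such a step could be wired together.

```python
import numpy as np

def dp_adam_step(theta, grad, m, v, t, *, lr=0.001, beta1=0.9, beta2=0.999,
                 eps=1e-8, clip_norm=1.0, epsilon=0.5, rng=None):
    """One hypothetical DP-Adam update: clip the gradient, add Laplace
    noise scaled to sensitivity / privacy budget, then apply Adam.

    theta, grad, m, v are NumPy arrays of the same shape; t is the
    1-based step counter used for Adam's bias correction.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Clip the gradient so its L1 norm is at most clip_norm, bounding the
    # sensitivity of the update (an assumed choice; the paper may differ).
    l1 = np.sum(np.abs(grad))
    grad = grad * min(1.0, clip_norm / (l1 + 1e-12))

    # Laplace mechanism: noise scale = sensitivity / epsilon.
    noisy = grad + rng.laplace(0.0, clip_norm / epsilon, size=grad.shape)

    # Standard Adam moment estimates and bias-corrected update.
    m = beta1 * m + (1 - beta1) * noisy
    v = beta2 * v + (1 - beta2) * noisy ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Because the noise is injected into the gradient before the moment estimates, both `m` and `v` are computed from the privatized gradient, so the parameter update itself never touches the raw per-example gradients.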
Authors
李敏
李红娇
陈杰
Li Min; Li Hongjiao; Chen Jie (College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai 200090, China)
Source
《计算机应用与软件》
PKU Core Journal (北大核心)
2020, No. 6, pp. 253-258, 296 (7 pages)
Computer Applications and Software
Funding
National Natural Science Foundation of China (61403247, 61702321)
Open Project of the Shanghai Key Laboratory of Integrated Administration Technologies for Information Security (AGK2015005).
Keywords
Neural network
Differential privacy
Adam algorithm
Privacy protection
About the Authors
Li Min (李敏), master's student; main research areas: deep learning, differential privacy. Li Hongjiao (李红娇), associate professor. Chen Jie (陈杰), master's student.