西安电子科技大学学报 ›› 2023, Vol. 50 ›› Issue (4): 54-64.doi: 10.19665/j.issn1001-2400.2023.04.006

• 网络空间安全专栏 •

自适应差分隐私的高效深度学习方案

王玉画,高胜,朱建明,黄晨

  1. 中央财经大学 信息学院,北京 100081
  • 收稿日期:2023-01-12 出版日期:2023-08-20 发布日期:2023-10-17
  • 通讯作者: 朱建明
  • 作者简介:王玉画(2000—),女,中央财经大学硕士研究生,E-mail:wyh1921352947@163.com;高胜(1987—),男,教授,博士,E-mail:sgao@cufe.edu.cn;黄晨(1997—),男,中央财经大学硕士研究生,E-mail:ichuang12@163.com
  • 基金资助:
    国家自然科学基金(62072487);北京市自然科学基金(M21036)

Efficient deep learning scheme with adaptive differential privacy

WANG Yuhua, GAO Sheng, ZHU Jianming, HUANG Chen

  1. School of Information,Central University of Finance and Economics,Beijing 102206,China
  • Received:2023-01-12 Online:2023-08-20 Published:2023-10-17
  • Contact: Jianming ZHU

摘要:

深度学习在诸多领域取得成功的同时,也逐渐暴露出严重的隐私安全问题。作为一种轻量级隐私保护技术,差分隐私通过对模型添加噪声使得输出结果对数据集中的任意一条数据都不敏感,更适合现实中个人用户隐私保护的场景。针对现有大多差分隐私深度学习方案中迭代次数对隐私预算的依赖、数据可用性较低和模型收敛速度较慢等问题,提出了一种自适应差分隐私的高效深度学习方案。首先,基于沙普利加性解释模型设计了一种自适应差分隐私机制,通过对样本特征加噪使得迭代次数独立于隐私预算,再利用函数机制扰动损失函数,从而实现对原始样本和标签的双重保护,同时增强数据可用性。其次,利用自适应矩估计算法调整学习率来加快模型收敛速度。并且,引入零集中差分隐私作为隐私损失统计机制,降低因隐私损失超过隐私预算带来的隐私泄露风险。最后,对方案的隐私性进行理论分析,并在MNIST和Fashion-MNIST数据集上通过对比实验,验证了所提方案的有效性。

关键词: 深度学习, 差分隐私, 自适应, 隐私损失, 模型收敛

Abstract:

While deep learning has achieved great success in many fields, it has also gradually exposed serious privacy and security issues. As a lightweight privacy protection technology, differential privacy adds noise to the model so that the output is insensitive to any single record in the dataset, which makes it well suited to protecting the privacy of individual users in practice. Aiming at the problems in most existing differentially private deep learning schemes, namely the dependence of the number of iterations on the privacy budget, low data availability, and slow model convergence, an efficient deep learning scheme with adaptive differential privacy is proposed. First, an adaptive differential privacy mechanism is designed based on the Shapley additive explanation model. By adding noise to the sample features, the number of iterations becomes independent of the privacy budget; the loss function is then perturbed by the functional mechanism, thus achieving dual protection of the original samples and labels while enhancing data utility. Second, the adaptive moment estimation algorithm is used to adjust the learning rate and thereby accelerate model convergence. In addition, zero-concentrated differential privacy is introduced as the privacy loss accounting mechanism, which reduces the risk of privacy leakage caused by the privacy loss exceeding the privacy budget. Finally, a theoretical privacy analysis is given, and the effectiveness of the proposed scheme is verified by comparative experiments on the MNIST and Fashion-MNIST datasets.
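The adaptive feature-noise idea in the abstract can be sketched as follows. This is a minimal illustration and not the paper's actual algorithm: it assumes the per-sample privacy budget is split across features in proportion to SHAP-style importance scores (all function and parameter names here are hypothetical), so that noise is injected into the features once, up front, and subsequent training iterations no longer consume the privacy budget.

```python
import numpy as np

def shap_guided_noise(x, importance, epsilon, sensitivity=1.0):
    """Perturb one sample's features with Laplace noise, splitting the
    per-sample budget `epsilon` in proportion to (hypothetical) SHAP
    importance: more important features receive a larger budget share,
    hence a smaller noise scale b = sensitivity / eps_j."""
    imp = np.abs(np.asarray(importance, dtype=float))
    weights = imp / imp.sum()            # budget share per feature
    eps_per_feature = epsilon * weights  # sequential composition over features
    scale = sensitivity / eps_per_feature
    return np.asarray(x, dtype=float) + np.random.laplace(0.0, scale)
```

Because the perturbation happens once on the inputs rather than on per-step gradients, the number of training iterations is decoupled from the budget, which is the property the abstract emphasizes.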

Key words: deep learning, differential privacy, self-adaptation, privacy loss, model convergence
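The zero-concentrated differential privacy (zCDP) accountant mentioned in the abstract tracks cumulative privacy loss additively. A brief sketch using standard zCDP results: a Gaussian mechanism with noise standard deviation σ and sensitivity Δ satisfies ρ-zCDP with ρ = Δ²/(2σ²), ρ composes additively across releases, and the total can be converted to (ε, δ)-DP via ε = ρ + 2·sqrt(ρ·ln(1/δ)). The numbers below are illustrative, not the paper's parameters.

```python
import math

def zcdp_of_gaussian(sigma, sensitivity=1.0):
    # Gaussian mechanism with noise std sigma satisfies rho-zCDP,
    # with rho = Delta^2 / (2 * sigma^2).
    return sensitivity ** 2 / (2 * sigma ** 2)

def zcdp_to_eps(rho, delta):
    # Standard conversion from rho-zCDP to (eps, delta)-DP.
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

# zCDP composes additively: after T releases, total rho = T * rho_step.
rho_step = zcdp_of_gaussian(sigma=4.0)   # 1 / 32 = 0.03125
rho_total = 100 * rho_step               # 3.125 after 100 releases
eps = zcdp_to_eps(rho_total, delta=1e-5)
```

Tracking ρ and checking it against the budget before each release is what lets the scheme stop before the accumulated privacy loss exceeds the allotted budget.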

中图分类号: 

  • TP309