IEEE Access
Differential Privacy Preservation in Robust Continual Learning
Ahmad Hassanpour1, Ahmed Abdelhadi1, Bian Yang2, Julian Fierrez2, Christoph Busch3, Majid Moradikia3
[1] Department of Information Security and Communication Technology, Norwegian University of Science and Technology (NTNU), Gjøvik, Norway; Engineering Technology Department, University of Houston, Houston, TX, USA
Keywords: Differential privacy; continual learning; deep learning; privacy
DOI: 10.1109/ACCESS.2022.3154826
Source: DOAJ
Abstract
Enhancing the privacy of machine learning (ML) algorithms has become crucial given the variety of attacks on AI applications. Continual learning (CL) is a branch of ML that aims to learn a set of knowledge sequentially and continuously from a data stream. Differential privacy (DP), in turn, has been used extensively to enhance the privacy of deep learning (DL) models. Adding DP to CL, however, is challenging: on the one hand, DP intrinsically adds noise that reduces utility; on the other hand, the endless learning procedure of CL is a serious obstacle, leading to catastrophic forgetting (CF) of previous samples of the ongoing stream. To add DP to CL, we propose a methodology by which we can not only strike a tradeoff between privacy and utility but also mitigate CF. The proposed solution offers a set of key features: (1) it guarantees theoretical privacy bounds by enforcing the DP principle; (2) it incorporates a robust procedure into the proposed DP-CL scheme to hinder CF; and (3) most importantly, it achieves practical continuous training for a CL process without exhausting the available privacy budget. Through extensive empirical evaluation and analyses on benchmark datasets, we validate the efficacy of the proposed solution.
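To make the tension between noise and utility concrete, the following is a minimal sketch (not the authors' exact method) of the standard DP-SGD-style gradient step that such DP training schemes build on: each per-example gradient is clipped to bound its sensitivity, and Gaussian noise scaled to that bound is added to the aggregate. The function name, `clip_norm`, and `noise_multiplier` values are illustrative assumptions.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0,
                     noise_multiplier=1.1, rng=None):
    """One DP-SGD-style gradient step (illustrative sketch):
    clip each per-example gradient to `clip_norm`, average them,
    then add Gaussian noise scaled to the clipping norm (the
    sensitivity of the averaged, clipped sum)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise stddev proportional to sensitivity; larger noise_multiplier
    # means stronger privacy but lower utility.
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

Each such noisy step consumes part of the privacy budget, which is why a naively noised, never-ending CL stream would eventually run out of budget; the paper's contribution (3) addresses exactly this.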
License: Unknown