A Neuron Noise-Injection Technique for Privacy Preserving Deep Neural Networks

Tosin A. Adesuyi and Byeong Man Kim
  • Department of Software Engineering, Kumoh National Institute of Technology, Gumi, South Korea

Abstract

Data is the key to information mining that unveils hidden knowledge. The ability to reveal knowledge relies on the extractable features of a dataset and likewise on the depth of the mining model. However, several of these datasets embed sensitive information that can engender privacy violations, and they are nevertheless used to build deep neural network (DNN) models. Recent approaches to enforce privacy and protect data sensitivity in DNN models degrade accuracy, giving rise to a significant accuracy disparity between a non-private DNN and a privacy-preserving DNN model. This accuracy gap stems from enormous, uncalculated noise flooding and the inability to quantify the right level of noise required to perturb distinct neurons in the DNN model, hence a dent in accuracy. Consequently, this has hindered the use of privacy-protected DNN models in real-life applications. In this paper, we present a neuron noise-injection technique based on layer-wise buffered contribution ratio forwarding and the ϵ-differential privacy technique to preserve privacy in a DNN model. We adapt a layer-wise relevance propagation technique to compute a contribution ratio for each neuron in our network at the pre-training phase. Based on the proportion of each neuron's contribution ratio, we generate a noise tuple via the Laplace mechanism, which helps to eliminate unwanted noise flooding. The noise tuple is subsequently injected into the training network through its neurons to preserve the privacy of the training dataset in a differentially private manner. Hence, each neuron receives the right proportion of noise as estimated via its contribution ratio, and as a result, the unquantifiable noise that drops the accuracy of privacy-preserving DNN models is avoided.
Extensive experiments were conducted on three real-world datasets, and the results show that our approach narrows the existing accuracy gap considerably and outperforms the state-of-the-art approaches in this context.
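As a rough illustration (not the authors' implementation), the budget-allocation idea described above can be sketched in Python: assume per-neuron contribution ratios have already been computed via layer-wise relevance propagation, and allot each neuron a share of the privacy budget ϵ proportional to its ratio, so that high-contribution neurons receive less Laplace noise. The function name, the proportional budget split, and the unit sensitivity are all illustrative assumptions.

```python
import numpy as np

def laplace_noise_tuple(contribution_ratios, epsilon, sensitivity=1.0, rng=None):
    """Draw one Laplace noise value per neuron (hypothetical sketch).

    Each neuron's share of the total privacy budget `epsilon` is taken
    proportional to its contribution ratio, so the Laplace scale
    b_i = sensitivity / eps_i shrinks as the contribution grows.
    """
    rng = rng if rng is not None else np.random.default_rng()
    ratios = np.asarray(contribution_ratios, dtype=float)
    ratios = ratios / ratios.sum()           # normalize: budget shares sum to 1
    per_neuron_eps = epsilon * ratios        # budget share per neuron
    scales = sensitivity / per_neuron_eps    # Laplace scale per neuron
    return rng.laplace(loc=0.0, scale=scales)

# Usage: perturb one layer's activations with the noise tuple.
activations = np.array([0.8, 0.1, 0.4])
ratios = np.array([0.5, 0.2, 0.3])           # e.g. from layer-wise relevance propagation
noisy = activations + laplace_noise_tuple(ratios, epsilon=1.0)
```

Under this allocation, the neuron with ratio 0.5 gets half the budget (scale 2.0) while the neuron with ratio 0.2 gets scale 5.0, i.e. noticeably heavier noise on the less relevant neuron.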


OPEN ACCESS

Open Computer Science is an open access, peer-reviewed journal. The journal publishes research results in the following fields: algorithms and complexity theory, artificial intelligence, bioinformatics, networking and security systems, programming languages, system and software engineering, and theoretical foundations of computer science.