A Method of Information Protection for Collaborative Deep Learning under GAN Model Attack


Bibliographic Details
Published in: IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2021-05, Vol. 18 (3), p. 871-881
Authors: Yan, Xiaodan; Cui, Baojiang; Xu, Yang; Shi, Peilin; Wang, Ziqi
Format: Article
Language: English
Description
Abstract: Deep learning is widely used in the medical field owing to its high accuracy in medical image classification and biological applications. However, under collaborative deep learning there is a serious risk of information leakage through attacks based on deep convolutional generative adversarial networks (DCGAN), which can defeat the network's privacy protection method. The risk of such information leakage is even greater in the medical field. This paper proposes a DCGAN-based privacy protection method that protects the information of collaborative deep learning training and enhances its stability. The proposed method adopts encrypted transmission when deep network parameters are exchanged. By setting buried points to detect generative adversarial network (GAN) attacks in the network and adjusting the training parameters, training based on the GAN model attack is rendered invalid and the information is effectively protected.
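The mechanism summarized in the abstract (encrypted parameter transmission plus a buried point that flags GAN-style probing and invalidates the attacker's training signal) can be pictured with the rough Python sketch below. This is a minimal illustration under assumptions, not the authors' implementation: Fernet encryption from the cryptography package stands in for the unspecified encryption scheme, and the decoy label BURIED_CLASS, the threshold ATTACK_THRESHOLD, and the drop-the-update response are hypothetical choices.

# Illustrative sketch (not the paper's code): encrypted parameter exchange
# plus a "buried point" check that detects GAN-style reconstruction attacks
# and withholds the training signal from the attacker.
import json
from cryptography.fernet import Fernet  # assumed stand-in for the paper's encryption

KEY = Fernet.generate_key()
CIPHER = Fernet(KEY)

BURIED_CLASS = 999          # hypothetical decoy label planted by the server
ATTACK_THRESHOLD = 0.5      # hypothetical sensitivity of the buried-point detector


def encrypt_params(params):
    """Serialize and encrypt a parameter update before transmission."""
    return CIPHER.encrypt(json.dumps(params).encode())


def decrypt_params(blob):
    """Decrypt and deserialize a received parameter update."""
    return json.loads(CIPHER.decrypt(blob).decode())


def buried_point_triggered(update):
    """Flag GAN-attack behaviour: unusually large update mass on the decoy class."""
    decoy = abs(update.get(str(BURIED_CLASS), 0.0))
    total = sum(abs(v) for v in update.values()) or 1.0
    return decoy / total > ATTACK_THRESHOLD


def server_aggregate(encrypted_updates):
    """Aggregate honest updates; neutralize any update that trips the buried point."""
    aggregated = {}
    for blob in encrypted_updates:
        update = decrypt_params(blob)
        if buried_point_triggered(update):
            # One possible "adjust the training parameters" response:
            # drop the suspicious update so the attacker's GAN gets no signal.
            continue
        for k, v in update.items():
            aggregated[k] = aggregated.get(k, 0.0) + v
    return aggregated


# Usage: one honest client and one client probing the decoy class.
honest = encrypt_params({"0": 0.1, "1": -0.2})
attacker = encrypt_params({str(BURIED_CLASS): 5.0, "0": 0.1})
print(server_aggregate([honest, attacker]))  # only the honest update contributes

In this toy setup the detector simply looks at how much of an update concentrates on the planted decoy class; a real deployment would derive the buried-point statistic from the model's gradients or outputs, as the paper's experiments presumably do.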
ISSN:1545-5963
1557-9964
DOI:10.1109/TCBB.2019.2940583