Highly efficient federated learning with strong privacy preservation in cloud computing

Bibliographic Details
Published in: Computers & Security, 2020-09, Vol. 96, Article 101889
Authors: Fang, Chen; Guo, Yuanbo; Wang, Na; Ju, Ankang
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Federated learning is a new machine learning framework that allows mutually distrusting clients to benefit from a jointly trained model without explicitly disclosing their private datasets. However, the high communication cost between the cloud server and the clients has become the main challenge due to limited network bandwidth. Moreover, the model parameters shared during training may be exploited to mount model inversion attacks. To address these problems, a new scheme for highly efficient federated learning with strong privacy preservation in cloud computing is presented. We design a lightweight encryption protocol that provides provable privacy preservation while maintaining desirable model utility. Additionally, an efficient optimization strategy is employed to enhance training efficiency. Under the defined threat model, we prove that the proposed scheme is secure against an honest-but-curious server and extreme collusion. We evaluate the effectiveness of our scheme and compare it with existing related works on MNIST and the UCI Human Activity Recognition dataset. Results show that our scheme reduces execution time by 20% and the transmitted ciphertext size by 85% on average while achieving accuracy similar to the compared secure multiparty computation (SMC) based methods.
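
The abstract does not spell out the encryption protocol itself, but the secure-aggregation idea behind SMC-based federated learning can be illustrated with a short sketch. The Python example below is an assumption-laden illustration, not the authors' scheme: it uses pairwise additive masking over a prime field, with hypothetical parameters FIELD and SCALE, and a centrally derived set of masks standing in for a real pairwise key-agreement step, so that the server recovers only the sum of the clients' model updates.

import numpy as np

# Illustrative sketch only -- NOT the paper's protocol. Pairwise additive
# masking is a standard SMC building block for federated aggregation:
# each pair of clients shares a random mask that one adds and the other
# subtracts, so all masks cancel in the server's sum and the server
# learns only the aggregate, never an individual client's update.

FIELD = 2**31 - 1   # assumed prime modulus for masking
SCALE = 10**6       # assumed fixed-point scaling factor

def encode(update):
    # Map a float update vector into the integer field (fixed-point).
    return np.round(update * SCALE).astype(np.int64) % FIELD

def decode(agg_sum, n_clients):
    # Undo the modular wrap and recover the averaged float vector.
    centered = np.where(agg_sum > FIELD // 2, agg_sum - FIELD, agg_sum)
    return centered.astype(np.float64) / (SCALE * n_clients)

def pairwise_masks(n_clients, dim, seed=42):
    # Derive cancelling masks; a real protocol would establish these
    # pairwise via key agreement rather than from a central RNG.
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim, dtype=np.int64) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.integers(0, FIELD, size=dim)
            masks[i] = (masks[i] + m) % FIELD
            masks[j] = (masks[j] - m) % FIELD
    return masks

# One aggregation round: 3 clients, 4-dimensional model updates.
updates = [np.array([0.1, -0.2, 0.3, 0.0]),
           np.array([0.2,  0.1, -0.1, 0.4]),
           np.array([-0.3, 0.0, 0.2, 0.1])]

masks = pairwise_masks(n_clients=3, dim=4)
masked = [(encode(u) + m) % FIELD for u, m in zip(updates, masks)]

# The server sums the masked updates; every pairwise mask cancels exactly.
aggregate = sum(masked) % FIELD
print(decode(aggregate, n_clients=3))  # element-wise mean of the updates

In this toy setup the printed vector equals the plain element-wise mean of the three updates, while the server only ever observes uniformly masked values.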
ISSN: 0167-4048, 1872-6208
DOI: 10.1016/j.cose.2020.101889