Communication-Efficient and Private Federated Learning with Adaptive Sparsity-Based Pruning on Edge Computing
Published in: Electronics (Basel), 2024-09, Vol. 13 (17), p. 3435
Main authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: As data-driven deep learning (DL) has been applied in various scenarios, privacy threats have become a widely recognized problem. To strengthen privacy protection in federated learning (FL), some methods adopt a one-shot differential privacy (DP) approach to obfuscate model updates, yet they do not account for the dynamic balance between efficiency and privacy protection. To this end, we propose ASPFL, an efficient FL approach with adaptive sparsity-based pruning and differential privacy protection. We further propose an adaptive pruning mechanism that uses the Jensen-Shannon divergence as the metric for generating sparse matrices, which are then applied to the model updates. In addition, we introduce adaptive Gaussian noise by assessing the variation in sensitivity of the post-pruning uploads. Extensive experiments validate that ASPFL boosts convergence speed by more than twofold under non-IID data. Compared with existing DP-FL methods, ASPFL achieves up to over 82% accuracy on CIFAR-10 while reducing communication cost by 40% under the same level of privacy protection.
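The abstract describes two interacting mechanisms: pruning of model updates driven by the Jensen-Shannon divergence and Gaussian noise re-calibrated to the post-pruning sensitivity. The sketch below is only an illustrative reading of that description, not the paper's implementation: it assumes the divergence is computed between the normalized magnitude distributions of the local and global updates, that a higher divergence keeps more coordinates, and that the noise scale follows the standard Gaussian mechanism with the clipping norm as the post-pruning sensitivity. Every function name and parameter here is hypothetical.

```python
# Illustrative sketch of an ASPFL-style client update (assumptions noted above).
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two non-negative weight profiles."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def adaptive_sparse_dp_update(local_update, global_update,
                              clip_norm=1.0, epsilon=1.0, delta=1e-5,
                              min_keep=0.1, max_keep=0.9,
                              rng=np.random.default_rng(0)):
    """Prune a local update with a JS-divergence-driven keep ratio, then add
    Gaussian noise calibrated to the post-pruning (clipped) sensitivity."""
    # 1. Keep ratio grows with the divergence between update magnitude profiles
    #    (assumption: larger divergence => more informative update => keep more).
    p = np.abs(local_update).ravel()
    q = np.abs(global_update).ravel()
    jsd = js_divergence(p, q)                              # in [0, ln 2]
    keep = min_keep + (max_keep - min_keep) * jsd / np.log(2)

    # 2. Sparse mask: keep the top-k coordinates by magnitude.
    k = max(1, int(keep * local_update.size))
    thresh = np.partition(np.abs(local_update).ravel(), -k)[-k]
    mask = (np.abs(local_update) >= thresh).astype(local_update.dtype)
    pruned = local_update * mask

    # 3. Clip the pruned update, then add Gaussian-mechanism noise; restricting
    #    noise to the kept coordinates is a simplification of the sketch.
    pruned *= min(1.0, clip_norm / (np.linalg.norm(pruned) + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=pruned.shape) * mask
    return pruned + noise, mask
```

In this reading, a client would call adaptive_sparse_dp_update on each round's update before transmission; only the masked coordinates (plus the mask) need to be uploaded, which is where the communication savings would come from.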
ISSN: 2079-9292
DOI: 10.3390/electronics13173435