FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation
Main Authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data. Concerns about privacy leakage, albeit demonstrated only under specific conditions, have triggered numerous follow-up studies on designing powerful attacks and effective defense mechanisms aimed at thwarting them. Nevertheless, the privacy-preserving mechanisms employed in these defenses invariably degrade model performance because they apply a fixed obfuscation to private data or gradients. In this article, we therefore propose a novel adaptive obfuscation mechanism, coined FedAdOb, that protects private data without sacrificing the original model performance. Technically, FedAdOb uses passport-based adaptive obfuscation to ensure data privacy in both horizontal and vertical federated learning settings. Its privacy-preserving capabilities with respect to private features and labels are proven in Theorems 1 and 2. Furthermore, extensive experiments on various datasets and network architectures demonstrate the effectiveness of FedAdOb, showing a superior trade-off between privacy preservation and model performance that surpasses existing methods.
DOI: 10.48550/arxiv.2406.01085
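The abstract above mentions passport-based adaptive obfuscation but gives no implementation details. As a rough illustration of the general passport-layer idea from the related literature (an affine scale and bias derived from a client's private "passport" through the layer's own weights, so the obfuscation adapts as training proceeds), here is a minimal PyTorch sketch. All names (PassportLinear, passport_gamma, passport_beta) are hypothetical and are not taken from the paper; this is a sketch of the technique family, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a passport-based adaptive obfuscation
# layer. Scale and bias are derived from private passport tensors via the
# layer's own weights, rather than being fixed or learned directly.
import torch
import torch.nn as nn


class PassportLinear(nn.Module):
    """Linear layer whose affine scale/bias depend on private passports.

    The passports stay on the client and are never shared, so a party
    without them cannot reproduce the affine transformation applied to
    the features.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        # Private passports: held locally, never transmitted.
        self.passport_gamma = nn.Parameter(torch.randn(in_features))
        self.passport_beta = nn.Parameter(torch.randn(in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale/bias are functions of (weight, passport); because the weight
        # changes every training step, the obfuscation adapts rather than
        # staying fixed.
        gamma = self.weight @ self.passport_gamma  # shape: (out_features,)
        beta = self.weight @ self.passport_beta    # shape: (out_features,)
        return gamma * (x @ self.weight.t()) + beta


if __name__ == "__main__":
    layer = PassportLinear(in_features=16, out_features=8)
    features = torch.randn(4, 16)
    obfuscated = layer(features)
    print(obfuscated.shape)  # torch.Size([4, 8])
```

One plausible reading of "adaptive" in the abstract is exactly this coupling: since gamma and beta are computed from the trainable weights rather than fixed up front, the obfuscation changes with every update, avoiding the fixed-obfuscation performance penalty the abstract criticizes.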