Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings
Format: | Article |
Language: | eng |
Abstract: | The classical machine learning paradigm requires the aggregation of user data
in a central location where machine learning practitioners can preprocess data,
calculate features, tune models, and evaluate performance. The advantages of this
approach include leveraging high-performance hardware (such as GPUs) and the
ability of machine learning practitioners to do in-depth data analysis to
improve model performance. However, these advantages may come at a cost to data
privacy: user data is collected, aggregated, and stored on centralized servers
for model development. Centralization of data poses risks, including a
heightened risk of internal and external security incidents as well as
accidental data misuse. Federated learning with differential privacy is
designed to avoid the server-side centralization pitfall by bringing the
learning step to users' devices. Learning is done in a federated manner: each
mobile device runs a training loop on a local copy of the model, and updates
from on-device models are sent to the server over encrypted channels, with
differential privacy applied, to improve the global model. In this paradigm,
users' personal data remains on their devices. Surprisingly, training a model in
this manner incurs only minimal degradation in model performance.
However, federated learning brings many other challenges due to its
distributed nature, heterogeneous compute environments, and lack of data
visibility. This paper explores those challenges and outlines an architectural
design solution we are exploring and testing to productionize federated
learning at Meta scale. |
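The round structure the abstract describes can be sketched as follows: devices run a local training loop on a copy of the model, and the server aggregates their updates with differential privacy before applying them to the global model. This is a minimal illustration of federated averaging with per-device update clipping and Gaussian noise (one common way to add differential privacy), not the paper's actual implementation; the function names, the toy linear model, and all hyperparameters are assumptions made for the example, and the encrypted transport the abstract mentions is omitted.

```python
import numpy as np

def local_training_step(global_weights, local_data, lr=0.01):
    """One gradient step on a device's local copy of the model.
    Toy linear-regression objective, used purely for illustration."""
    X, y = local_data
    # Gradient of the mean squared error (up to a constant factor).
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def clip_update(update, clip_norm):
    """Bound each device's contribution by clipping its L2 norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_federated_round(global_weights, device_datasets,
                       clip_norm=1.0, noise_std=0.01):
    """One communication round: each device trains locally, the server
    averages the clipped updates and adds Gaussian noise (a standard
    differential-privacy mechanism) before updating the global model."""
    updates = []
    for data in device_datasets:
        local_weights = local_training_step(global_weights, data)
        updates.append(clip_update(local_weights - global_weights, clip_norm))
    mean_update = np.mean(updates, axis=0)
    noisy_update = mean_update + np.random.normal(
        0.0, noise_std, size=mean_update.shape)
    return global_weights + noisy_update

# Hypothetical usage: ten simulated devices, fifty communication rounds.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(10):
    X = rng.normal(size=(64, 3))
    devices.append((X, X @ true_w + 0.1 * rng.normal(size=64)))

w = np.zeros(3)
for _ in range(50):
    w = dp_federated_round(w, devices)
```

In this sketch the clipping bound is what makes the Gaussian noise meaningful: it caps any single device's influence on the average, so the noise scale can be chosen relative to `clip_norm` to bound what the aggregated update reveals about any one user's data.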
DOI: | 10.48550/arxiv.2206.00807 |