ExclaveFL: Providing Transparency to Federated Learning using Exclaves
Format: Article
Language: English
Online access: Order full text
Abstract: In federated learning (FL), data providers jointly train a model without
disclosing their training data. Despite its privacy benefits, a malicious data
provider can simply deviate from the correct training protocol without being
detected, thus attacking the trained model. While current solutions have
explored the use of trusted execution environments (TEEs) to combat such
attacks, there is a mismatch with the security needs of FL: TEEs offer
confidentiality guarantees, which are unnecessary for FL and make them
vulnerable to side-channel attacks, and focus on coarse-grained attestation,
which does not capture the execution of FL training.
We describe ExclaveFL, an FL platform that achieves end-to-end transparency
and integrity for detecting attacks. ExclaveFL achieves this by employing a new
hardware security abstraction, exclaves, which focus on integrity-only
guarantees. ExclaveFL uses exclaves to protect the execution of FL tasks, while
generating signed statements containing fine-grained, hardware-based
attestation reports of task execution at runtime. ExclaveFL then enables
auditing using these statements to construct an attested dataflow graph and
check that the FL training job satisfies claims, such as the absence of
attacks. Our experiments show that ExclaveFL introduces less than 9% overhead
while detecting a wide range of attacks.
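
To make the auditing idea concrete, the following is a minimal sketch (not the authors' implementation) of how signed per-task statements could be linked into an attested dataflow graph and checked. The statement fields, the HMAC-based signing stand-in for hardware attestation keys, and the task names are illustrative assumptions.

```python
# Sketch: each FL task emits a signed statement describing the hashes of its
# inputs and outputs; an auditor verifies the signatures, links statements
# into a dataflow graph by matching output hashes to input hashes, and can
# then check claims over that graph. Fields and keys are hypothetical.
import hashlib
import hmac
import json

TRUSTED_KEY = b"hardware-attestation-key"  # stand-in for a per-exclave key

def sign(statement: dict) -> str:
    body = json.dumps(statement, sort_keys=True).encode()
    return hmac.new(TRUSTED_KEY, body, hashlib.sha256).hexdigest()

def verify(statement: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(statement), signature)

def build_dataflow_graph(signed_statements):
    """Return edges (producer_task -> consumer_task) linked by data hashes."""
    producers = {}
    for stmt, sig in signed_statements:
        if not verify(stmt, sig):
            raise ValueError(f"unattested statement from {stmt['task']}")
        for out in stmt["outputs"]:
            producers[out] = stmt["task"]
    edges = []
    for stmt, _ in signed_statements:
        for inp in stmt["inputs"]:
            if inp in producers:
                edges.append((producers[inp], stmt["task"]))
    return edges

# Example: one training task feeding one aggregation task.
train = {"task": "train@client1", "inputs": [], "outputs": ["h_update1"]}
agg = {"task": "aggregate@server", "inputs": ["h_update1"], "outputs": ["h_model"]}
stmts = [(train, sign(train)), (agg, sign(agg))]
print(build_dataflow_graph(stmts))  # [('train@client1', 'aggregate@server')]
```

A claim such as "every input to the aggregation task was produced by an attested training task" then reduces to a check over the edges of this graph; the paper's system performs such checks against hardware-generated attestation reports rather than the simplified signatures used here.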
DOI: 10.48550/arxiv.2412.10537