FEDERATED LEARNING WITH PARTITIONED AND DYNAMICALLY-SHUFFLED MODEL UPDATES

Bibliographic Details
Authors: Jamjoom, Hani Talal; Eykholt, Kevin; Radhakrishnan, Jayaram Kallapalayam; Valdez, Enriquillo; Cheng, Pau-Chen; Verma, Ashish; Gu, Zhongshu
Format: Patent
Language: English
Abstract
Techniques for distributed federated learning leverage a multi-layered defense strategy to reduce information leakage. In lieu of aggregating model updates centrally, the aggregation function is decentralized into multiple independent and functionally-equivalent execution entities, each running within its own trusted execution environment (TEE). The TEEs enable confidential and remotely-attestable federated aggregation. Preferably, each aggregator entity runs within an encrypted virtual machine that supports runtime in-memory encryption. Each party remotely authenticates the TEE before participating in the training. By using multiple decentralized aggregators, parties can partition their respective model updates at model-parameter granularity and map individual weights to a specific aggregator entity. Parties can also dynamically shuffle fragmentary model updates at each training iteration to further obfuscate the information dispatched to each aggregator execution entity. This architecture prevents the aggregator from being a single point of failure and serves to protect the model even if all aggregators are compromised.
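
The partition-and-shuffle mechanism can be sketched in a few lines. The Python example below is a minimal illustration under simplifying assumptions, not the patented implementation: the names Aggregator, shuffle_assignment, and partition_update are hypothetical, and all parties are assumed to share the same per-iteration parameter-to-aggregator mapping so that each aggregator can average matching parameters. It shows each party splitting its flat model update at single-parameter granularity, dispatching the fragments to different aggregator entities, and re-drawing the mapping on every training iteration so that no aggregator accumulates a stable view of the model.

    import numpy as np

    # Illustrative sketch only: these names are hypothetical, not APIs from the patent.
    NUM_AGGREGATORS = 3   # independent, functionally-equivalent aggregator entities
    NUM_PARAMS = 8        # size of the (toy) flat model-update vector
    NUM_PARTIES = 2

    class Aggregator:
        """One decentralized aggregation entity (conceptually running inside its own TEE)."""
        def __init__(self):
            self.fragments = {}   # parameter index -> list of values received from parties

        def receive(self, fragment):
            for idx, value in fragment.items():
                self.fragments.setdefault(idx, []).append(value)

        def aggregate(self):
            # Average only the parameters this aggregator was assigned in this round.
            return {idx: float(np.mean(vals)) for idx, vals in self.fragments.items()}

    def shuffle_assignment(num_params, num_aggregators, rng):
        """Randomly map every model parameter to one aggregator for the current iteration."""
        return rng.integers(0, num_aggregators, size=num_params)

    def partition_update(update, assignment, num_aggregators):
        """Split a flat model update into per-aggregator fragments at parameter granularity."""
        fragments = [{} for _ in range(num_aggregators)]
        for idx, value in enumerate(update):
            fragments[assignment[idx]][idx] = float(value)
        return fragments

    rng = np.random.default_rng(0)
    party_updates = [rng.normal(size=NUM_PARAMS) for _ in range(NUM_PARTIES)]

    # A fresh parameter-to-aggregator mapping is drawn every training iteration,
    # so no aggregator sees the same slice of the model across rounds.
    assignment = shuffle_assignment(NUM_PARAMS, NUM_AGGREGATORS, rng)
    aggregators = [Aggregator() for _ in range(NUM_AGGREGATORS)]

    for update in party_updates:
        for agg, fragment in zip(aggregators, partition_update(update, assignment, NUM_AGGREGATORS)):
            agg.receive(fragment)

    # Reassemble the averaged global update from the partial, per-aggregator results.
    global_update = np.zeros(NUM_PARAMS)
    for agg in aggregators:
        for idx, value in agg.aggregate().items():
            global_update[idx] = value
    print(global_update)

In the architecture described by the abstract, each aggregator would additionally run inside a remotely attested TEE (for example an encrypted virtual machine with runtime in-memory encryption); the sketch above does not model attestation or encryption, only the partitioning and per-iteration shuffling of update fragments.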