Secure and efficient parameters aggregation protocol for federated incremental learning and its applications

Bibliographic details
Published in: International Journal of Intelligent Systems, 2022-08, Vol. 37 (8), p. 4471-4487
Authors: Wang, Xiaoying, Liang, Zhiwei, Koe, Arthur Sandor Voundi, Wu, Qingwu, Zhang, Xiaodong, Li, Haitao, Yang, Qintai
Format: Article
Language: English
Online access: Full text
Abstract: Federated Learning (FL) enables the deployment of distributed machine learning models over the cloud and Edge Devices (EDs) while preserving the privacy of sensitive local data, such as electronic health records. However, despite the security and flexibility advantages of FL, current constructions still suffer from several limitations: heavy computation overhead on resource-limited EDs, communication overhead in uploading converged local model parameters to a centralized server for aggregation, and no guarantee that previously acquired knowledge is preserved under incremental learning over new local data sets. This paper introduces a secure and resource-friendly protocol for parameter aggregation in federated incremental learning and its applications. In this study, the central server relies on a new parameter aggregation method called orthogonal gradient aggregation. The method assumes that each local data set changes constantly and updates parameters in the direction orthogonal to previous parameter spaces. As a result, the new construction is robust against catastrophic forgetting, maintains the accuracy of the federated neural network, and is efficient in both computation and communication overhead. Moreover, extensive experimental analysis over several significant incremental-learning data sets demonstrates the efficiency, efficacy, and flexibility of the new protocol.
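
The abstract describes orthogonal gradient aggregation only at a high level. The Python sketch below illustrates one plausible reading of it, assuming the server keeps an orthonormal basis of gradient directions from previous rounds and projects each newly aggregated gradient onto the subspace orthogonal to that basis before updating the global model. The function names, the FedAvg-style mean, and the Gram-Schmidt bookkeeping are illustrative assumptions, not the authors' implementation.

import numpy as np

def federated_average(client_grads):
    # Plain FedAvg-style mean of the client gradient vectors (assumption).
    return np.mean(np.stack(client_grads), axis=0)

def project_orthogonal(grad, basis):
    # Remove the components of grad that lie in span(basis).
    for b in basis:
        grad = grad - np.dot(grad, b) * b
    return grad

def update_basis(basis, grad, tol=1e-8):
    # Gram-Schmidt step: extend the basis if grad adds a new direction.
    residual = project_orthogonal(grad, basis)
    norm = np.linalg.norm(residual)
    if norm > tol:
        basis.append(residual / norm)
    return basis

# Example: three incremental rounds on a toy 10-parameter model.
rng = np.random.default_rng(0)
theta = rng.normal(size=10)          # global model parameters
basis = []                            # directions from previous data sets
for _ in range(3):
    client_grads = [rng.normal(size=10) for _ in range(5)]
    g = federated_average(client_grads)
    g_orth = project_orthogonal(g, basis)   # avoid overwriting old knowledge
    theta -= 0.1 * g_orth                   # gradient step, learning rate 0.1
    basis = update_basis(basis, g)

Because each update is constrained to be orthogonal to directions associated with earlier data sets, parameters learned on those data sets are disturbed as little as possible, which is the mechanism the abstract credits for robustness against catastrophic forgetting.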
ISSN: 0884-8173 (print); 1098-111X (electronic)
DOI: 10.1002/int.22727