DMAF: data-model anti-forgetting for federated incremental learning



Bibliographic Details
Published in: Cluster Computing 2025-02, Vol. 28 (1), p. 30, Article 30
Main Authors: Zhu, Kongshang, Xu, Jiuyun, Zhou, Liang, Li, Xiaowen, Zhao, Yingzhi, Xu, Xiangrui, Li, Shibao
Format: Article
Language: English
Online Access: Full text
Description
Summary: Federated Learning has received much attention due to its data privacy benefits, but most existing approaches assume that client classes are fixed. In practice, clients may remove old classes and add new ones, leading to catastrophic forgetting in the model. Existing methods have limitations: some require additional client-side storage, and distillation-based methods become less effective as the number of new classes grows. For this reason, this paper proposes the Data-Model Anti-Forgetting (DMAF) framework. Specifically, the framework introduces an auxiliary client and a group aggregation method to mitigate catastrophic forgetting at the data level, which does not require clients to allocate additional storage for synthetic data and can balance class distributions. A multi-teacher integrated knowledge distillation method is adopted to retain old-class knowledge by distilling from multiple teacher models, and a task fusion step is designed to further tune the global model. Finally, the paper conducts extensive experiments on public datasets to validate the effectiveness of DMAF.
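
The multi-teacher knowledge distillation mentioned in the summary can be illustrated with a minimal sketch. The snippet below is not the authors' DMAF implementation; it is a generic PyTorch example (the function name, the temperature and alpha parameters, and the toy data are assumptions) showing how a student model can be regularized toward the averaged soft predictions of several frozen teacher models while still learning from hard labels.

# Illustrative sketch only (not the DMAF implementation): multi-teacher
# knowledge distillation, where several frozen teachers contribute averaged
# soft targets and the student minimizes a weighted sum of cross-entropy
# on true labels and KL divergence to those soft targets.
import torch
import torch.nn.functional as F


def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    targets, temperature=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and KL divergence to the
    averaged softened predictions of multiple teacher models."""
    # Hard-label loss on the current task's data.
    ce_loss = F.cross_entropy(student_logits, targets)

    # Average the teachers' softened probability distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)

    # KL divergence between student and averaged teacher distributions,
    # scaled by T^2 as is conventional in distillation.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        teacher_probs,
        reduction="batchmean",
    ) * (temperature ** 2)

    return alpha * kd_loss + (1.0 - alpha) * ce_loss


if __name__ == "__main__":
    # Toy usage: 8 samples, 10 classes, two frozen teacher models.
    student_logits = torch.randn(8, 10, requires_grad=True)
    teachers = [torch.randn(8, 10), torch.randn(8, 10)]
    labels = torch.randint(0, 10, (8,))
    loss = multi_teacher_distillation_loss(student_logits, teachers, labels)
    loss.backward()
    print(f"multi-teacher KD loss: {loss.item():.4f}")

In an incremental setting, the teacher models would typically be frozen copies of models trained on earlier tasks, so the distillation term discourages the current global model from drifting away from old-class knowledge while the cross-entropy term fits the new classes.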
ISSN: 1386-7857, 1573-7543
DOI: 10.1007/s10586-024-04697-9