Lead federated neuromorphic learning for wireless edge artificial intelligence

Bibliographic Details
Published in: Nature Communications, 2022-07, Vol. 13 (1), Article 4269 (12 pages)
Authors: Yang, Helin; Lam, Kwok-Yan; Xiao, Liang; Xiong, Zehui; Hu, Hao; Niyato, Dusit; Vincent Poor, H.
Format: Article
Language: English
Online access: Full text
Description
Abstract: In order to realize the full potential of wireless edge artificial intelligence (AI), very large and diverse datasets will often be required for energy-demanding model training on resource-constrained edge devices. This paper proposes a lead federated neuromorphic learning (LFNL) technique, which is a decentralized, energy-efficient, brain-inspired computing method based on spiking neural networks. The proposed technique will enable edge devices to exploit brain-like biophysiological structure to collaboratively train a global model while helping preserve privacy. Experimental results show that, under uneven dataset distribution among edge devices, LFNL achieves a recognition accuracy comparable to existing edge AI techniques, while substantially reducing data traffic by >3.5× and computational latency by >2.0×. Furthermore, LFNL significantly reduces energy consumption by >4.5× compared to standard federated learning, with a slight accuracy loss of up to 1.5%. Therefore, the proposed LFNL can facilitate the development of brain-inspired computing and edge AI.

Designing energy-efficient computing solutions for the implementation of AI algorithms on edge devices remains a challenge. Yang et al. propose a decentralized brain-inspired computing method that enables multiple edge devices to collaboratively train a global model without a fixed central coordinator.
ISSN: 2041-1723
DOI: 10.1038/s41467-022-32020-w
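
To make the abstract's high-level description concrete, below is a minimal Python sketch of one leader-based federated learning round of the kind LFNL describes: the devices elect a temporary lead device instead of relying on a fixed central coordinator, each device trains locally, and the lead aggregates the updates. The capability score used for election, the placeholder local_train step standing in for on-device spiking-neural-network training, and the plain federated-averaging aggregation are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a leader-based federated averaging loop.
# The capability score, the toy "SNN" weight vectors, and the local_train
# placeholder are assumptions for illustration, not the LFNL algorithm itself.
import numpy as np

rng = np.random.default_rng(0)


class EdgeDevice:
    def __init__(self, device_id, n_params=16):
        self.device_id = device_id
        # Stand-in for spiking-neural-network parameters (e.g., synaptic weights).
        self.weights = rng.normal(size=n_params)
        # Capability score used for lead election (battery, compute, link quality);
        # the real LFNL election criterion is not specified in this record.
        self.capability = rng.uniform()

    def local_train(self, global_weights, lr=0.1):
        # Placeholder for local SNN training on the device's private data:
        # here we simply nudge the weights toward a device-specific target.
        target = rng.normal(size=global_weights.shape)
        self.weights = global_weights + lr * (target - global_weights)
        return self.weights


def elect_lead(devices):
    # Choose the most capable device as the temporary aggregator,
    # so no fixed central coordinator is needed.
    return max(devices, key=lambda d: d.capability)


def federated_round(devices, global_weights):
    lead = elect_lead(devices)
    local_updates = [d.local_train(global_weights) for d in devices]
    # The elected lead aggregates the local updates (plain FedAvg here)
    # and would broadcast the new global model back to the other devices.
    new_global = np.mean(local_updates, axis=0)
    return lead, new_global


devices = [EdgeDevice(i) for i in range(5)]
global_weights = np.zeros(16)
for round_idx in range(3):
    lead, global_weights = federated_round(devices, global_weights)
    print(f"round {round_idx}: lead=device{lead.device_id}, "
          f"mean |w|={np.abs(global_weights).mean():.3f}")
```

Because the lead role is re-elected each round, no single device has to act as a permanent parameter server, which matches the abstract's claim of collaborative training without a fixed central coordinator.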