Improving edge AI for industrial IoT applications with distributed learning using consensus

Bibliographic Details
Published in: Design automation for embedded systems, 2024-03, Vol. 28 (1), p. 67-89
Authors: Fidelis, Samuel; Castro, Márcio; Siqueira, Frank
Format: Article
Language: English
Online access: Full text
Description
Abstract: Internet of Things (IoT) devices produce massive amounts of data in a very short time. Transferring these data to the cloud to be analyzed may be prohibitive for applications that require near real-time processing. One solution to meet such timing requirements is to bring most data processing closer to IoT devices (i.e., to the edge). In this context, the present work proposes a distributed architecture that meets the timing requirements imposed by Industrial IoT (IIoT) applications that need to apply Machine Learning (ML) models with high accuracy and low latency. This is done by dividing the tasks of storing and processing data into different layers (mist, fog, and cloud), using the cloud layer only for the tasks related to long-term storage of summarized data and hosting of necessary reports and dashboards. The proposed architecture employs ML inferences in the edge layer in a distributed fashion, where each edge node is either responsible for applying a different ML technique or the same technique but with a different training data set. Then, a consensus algorithm takes the ML inference results from the edge nodes to decide the result of the inference, thus improving the system's overall accuracy. Results obtained with two different data sets show that the proposed approach can improve the accuracy of the ML models without significantly compromising the response time.
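
The abstract does not state which consensus rule is applied to the per-node inference results. The Python sketch below is an illustrative assumption, not the paper's implementation: a simple majority vote over the class labels returned by the edge nodes. The node identifiers, label values, and the consensus() function name are all hypothetical.

from collections import Counter

def consensus(node_predictions: dict) -> str:
    # node_predictions maps an edge-node identifier to the class label
    # that node's local ML model inferred for the current input sample.
    votes = Counter(node_predictions.values())
    label, _count = votes.most_common(1)[0]  # most frequent label wins
    return label

# Example: three edge nodes, each running a different model (or the same
# model trained on a different data subset), classify one sensor reading.
predictions = {
    "edge-node-1": "anomaly",
    "edge-node-2": "normal",
    "edge-node-3": "anomaly",
}
print(consensus(predictions))  # prints "anomaly"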
ISSN: 0929-5585, 1572-8080
DOI: 10.1007/s10617-024-09284-0