DISNET: Distributed Micro-Split Deep Learning in Heterogeneous Dynamic IoT


Detailed Description

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2024-02, Vol. 11 (4), p. 6199-6216
Main authors: Samikwa, Eric; Maio, Antonio Di; Braun, Torsten
Format: Article
Language: English
Description
Abstract: The key impediments to deploying deep neural networks (DNNs) in Internet of Things (IoT) edge environments lie in the gap between the expensive DNN computation and the limited computing capability of IoT devices. Current state-of-the-art machine learning models place significant demands on memory, computation, and energy, and raise challenges for integrating them with the decentralized operation of heterogeneous and resource-constrained IoT devices. Recent studies have proposed the cooperative execution of DNN models on IoT devices to enhance the reliability, privacy, and efficiency of intelligent IoT systems, but disregarded flexible fine-grained model partitioning schemes for optimal distribution of DNN execution tasks in dynamic IoT networks. In this article, we propose distributed micro-split deep learning in heterogeneous dynamic IoT (DISNET). DISNET accelerates inference and minimizes energy consumption by combining vertical (layer-based) and horizontal DNN partitioning to enable flexible, distributed, and parallel execution of neural network models on heterogeneous IoT devices. DISNET considers the IoT devices' computing and communication resources and the network conditions for resource-aware cooperative DNN inference. Experimental evaluation in dynamic IoT networks shows that DISNET reduces DNN inference latency and energy consumption by up to 5.2× and 6×, respectively, compared to two state-of-the-art schemes, without loss of accuracy.
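As an illustration of the two partitioning axes the abstract names, the sketch below chains a vertical (layer-based) split of a toy MLP across two simulated devices and splits one layer horizontally across its output neurons. This is a minimal NumPy sketch, not the paper's implementation: the layer sizes, device assignment, and function names are all hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
# Toy 3-layer MLP weights (hypothetical sizes, not from the paper).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 4))

def full_model(x):
    # Reference: the unpartitioned forward pass.
    return relu(relu(x @ W1) @ W2) @ W3

# Vertical (layer-based) split: device A runs layer 1, then sends the
# intermediate activation over the network to device B for layers 2-3.
def device_a(x):
    return relu(x @ W1)

def device_b(h):
    return relu(h @ W2) @ W3

# Horizontal split of layer 2: each device computes a slice of the output
# neurons in parallel; the slices are concatenated afterwards.
def layer2_horizontal(h, n_devices=2):
    parts = np.array_split(W2, n_devices, axis=1)  # split output columns
    return np.concatenate([relu(h @ w) for w in parts], axis=1)

x = rng.normal(size=(1, 8))
h = device_a(x)
# Both partitionings reproduce the unpartitioned computation exactly.
assert np.allclose(layer2_horizontal(h), relu(h @ W2))
assert np.allclose(device_b(h), full_model(x))
```

Because ReLU is applied element-wise, a column-wise split of the weight matrix commutes with the activation, which is why the horizontal slices can be computed independently and concatenated without changing the result; the scheduling of such splits across heterogeneous devices is what DISNET optimizes.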
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3313514