DISNET: Distributed Micro-Split Deep Learning in Heterogeneous Dynamic IoT
The key impediments to deploying deep neural networks (DNNs) in Internet of Things (IoT) edge environments lie in the gap between the expensive DNN computation and the limited computing capability of IoT devices. Current state-of-the-art machine learning models have significant demands on memory, computation, and energy and raise challenges for integrating them with the decentralized operation of heterogeneous and resource-constrained IoT devices. Recent studies have proposed the cooperative execution of DNN models in IoT devices to enhance the reliability, privacy, and efficiency of intelligent IoT systems but disregarded flexible fine-grained model partitioning schemes for optimal distribution of DNN execution tasks in dynamic IoT networks. In this article, we propose distributed micro-split deep learning in heterogeneous dynamic IoT (DISNET). DISNET accelerates inference time and minimizes energy consumption by combining vertical (layer-based) and horizontal DNN partitioning to enable flexible, distributed, and parallel execution of neural network models on heterogeneous IoT devices. DISNET considers the IoT devices' computing and communication resources and the network conditions for resource-aware cooperative DNN inference. Experimental evaluation in dynamic IoT networks shows that DISNET reduces the DNN inference latency and energy consumption by up to 5.2× and 6×, respectively, compared to two state-of-the-art schemes without loss of accuracy.
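As a rough illustration of the micro-split idea described in the abstract, the Python sketch below combines a vertical (layer-based) split of a model with a horizontal split of a single layer, driven only by each device's compute capability. The `Device` and `Layer` classes, the greedy heuristic, and all numbers are hypothetical assumptions for illustration and are not the authors' algorithm; DISNET itself also accounts for communication resources and changing network conditions, which this toy sketch omits.

```python
# Illustrative sketch only (not the authors' implementation): a toy,
# resource-aware combination of vertical (layer-based) and horizontal
# partitioning for distributing DNN inference across heterogeneous devices.
# All class fields, the greedy heuristic, and the numbers are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Device:
    name: str
    flops_per_s: float          # compute capability of the device

@dataclass
class Layer:
    name: str
    flops: float                # compute cost of the layer
    splittable: bool            # can its work be split horizontally?

def vertical_split(layers: List[Layer], devices: List[Device]) -> List[List[Layer]]:
    """Greedy layer-based (vertical) partitioning: assign consecutive layers
    to a device until it exceeds its share of the total compute, then move on."""
    total_flops = sum(l.flops for l in layers)
    total_cap = sum(d.flops_per_s for d in devices)
    parts: List[List[Layer]] = [[] for _ in devices]
    d, used = 0, 0.0
    budget = total_flops * devices[d].flops_per_s / total_cap
    for layer in layers:
        if used + layer.flops > budget and d < len(devices) - 1:
            d, used = d + 1, 0.0
            budget = total_flops * devices[d].flops_per_s / total_cap
        parts[d].append(layer)
        used += layer.flops
    return parts

def horizontal_shares(layer: Layer, helpers: List[Device]) -> List[float]:
    """Horizontal partitioning of one splittable layer: divide its work among
    helper devices in proportion to their compute capability."""
    assert layer.splittable, "layer cannot be split horizontally"
    cap = sum(h.flops_per_s for h in helpers)
    return [h.flops_per_s / cap for h in helpers]

if __name__ == "__main__":
    devices = [Device("sensor-node", 1.5e9), Device("gateway", 3.0e9)]
    layers = [Layer("conv1", 1.0e9, True),
              Layer("conv2", 2.0e9, True),
              Layer("fc", 0.5e9, False)]
    for dev, part in zip(devices, vertical_split(layers, devices)):
        print(dev.name, "->", [l.name for l in part])
    print("conv1 horizontal shares:", horizontal_shares(layers[0], devices))
```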
Saved in:
Published in: | IEEE internet of things journal 2024-02, Vol.11 (4), p.6199-6216 |
---|---|
Main authors: | Samikwa, Eric; Maio, Antonio Di; Braun, Torsten |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 6216 |
---|---|
container_issue | 4 |
container_start_page | 6199 |
container_title | IEEE internet of things journal |
container_volume | 11 |
creator | Samikwa, Eric; Maio, Antonio Di; Braun, Torsten |
description | The key impediments to deploying deep neural networks (DNNs) in Internet of Things (IoT) edge environments lie in the gap between the expensive DNN computation and the limited computing capability of IoT devices. Current state-of-the-art machine learning models have significant demands on memory, computation, and energy and raise challenges for integrating them with the decentralized operation of heterogeneous and resource-constrained IoT devices. Recent studies have proposed the cooperative execution of DNN models in IoT devices to enhance the reliability, privacy, and efficiency of intelligent IoT systems but disregarded flexible fine-grained model partitioning schemes for optimal distribution of DNN execution tasks in dynamic IoT networks. In this article, we propose distributed micro-split deep learning in heterogeneous dynamic IoT (DISNET). DISNET accelerates inference time and minimizes energy consumption by combining vertical (layer-based) and horizontal DNN partitioning to enable flexible, distributed, and parallel execution of neural network models on heterogeneous IoT devices. DISNET considers the IoT devices' computing and communication resources and the network conditions for resource-aware cooperative DNN inference. Experimental evaluation in dynamic IoT networks shows that DISNET reduces the DNN inference latency and energy consumption by up to 5.2× and 6×, respectively, compared to two state-of-the-art schemes without loss of accuracy. |
doi_str_mv | 10.1109/JIOT.2023.3313514 |
format | Article |
coden | IITJAU |
eissn | 2327-4662 |
ieee_id | 10243578 |
linktohtml | https://ieeexplore.ieee.org/document/10243578 |
orcid | 0000-0001-8062-5083; 0000-0001-5968-7108; 0000-0001-8495-8926 |
publisher | Piscataway: IEEE |
fulltext | fulltext |
identifier | ISSN: 2327-4662 |
ispartof | IEEE internet of things journal, 2024-02, Vol.11 (4), p.6199-6216 |
issn | 2327-4662 2327-4662 |
language | eng |
recordid | cdi_crossref_primary_10_1109_JIOT_2023_3313514 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Cloud computing; Collaboration; Computation; Computational modeling; Data models; Deep learning; Distributed machine learning (ML); edge computing; Energy consumption; Inference; Internet of Things; Internet of Things (IoT); Machine learning; micro-split deep learning (DL); Network latency; Neural networks; Partitioning; Servers; Task analysis |
title | DISNET: Distributed Micro-Split Deep Learning in Heterogeneous Dynamic IoT |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-22T03%3A58%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=DISNET:%20Distributed%20Micro-Split%20Deep%20Learning%20in%20Heterogeneous%20Dynamic%20IoT&rft.jtitle=IEEE%20internet%20of%20things%20journal&rft.au=Samikwa,%20Eric&rft.date=2024-02-15&rft.volume=11&rft.issue=4&rft.spage=6199&rft.epage=6216&rft.pages=6199-6216&rft.issn=2327-4662&rft.eissn=2327-4662&rft.coden=IITJAU&rft_id=info:doi/10.1109/JIOT.2023.3313514&rft_dat=%3Cproquest_cross%3E2923119892%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2923119892&rft_id=info:pmid/&rft_ieee_id=10243578&rfr_iscdi=true |