Dynamic Network-Assisted D2D-Aided Coded Distributed Learning

Today, numerous machine learning (ML) applications offer continuous data processing and real-time data analytics at the edge of wireless networks. Distributed real-time ML solutions are highly susceptible to the so-called straggler effect caused by resource heterogeneity, which can be mitigated by various computation offloading mechanisms that severely impact communication efficiency, especially in large-scale scenarios. To reduce the communication overhead, we leverage device-to-device (D2D) connectivity, which enhances spectrum utilization and allows for efficient data exchange between proximate devices. In particular, we design a novel D2D-aided coded distributed learning method named D2D-CDL for efficient load balancing across devices. The proposed solution captures system dynamics, including data (time-varying learning model, irregular intensity of data arrivals), device (diverse computational resources and volume of training data), and deployment (different locations and D2D graph connectivity). To decrease the number of communication rounds, we derive an optimal compression rate, which minimizes the processing time. The resulting optimization problem provides suboptimal compression parameters that improve the total training time. Our proposed method is particularly beneficial for real-time collaborative applications, where users continuously generate training data thus yielding a model drift.
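As a rough illustration of the general idea behind coded distributed computing that this line of work builds on (redundant task assignment so the coordinator never waits for the slowest device), consider the following minimal Python sketch. This is an illustrative toy, not the D2D-CDL algorithm from the paper: the worker names, replication scheme, and timing model are all assumptions made for the example.

```python
# Toy sketch of straggler mitigation via redundant ("coded") task
# assignment. NOT the paper's D2D-CDL method; purely illustrative.

def assign_with_redundancy(num_partitions, workers, replication=2):
    """Assign each data partition to `replication` distinct workers."""
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [workers[(p + r) % len(workers)]
                         for r in range(replication)]
    return assignment

def round_time_with_redundancy(assignment, compute_times):
    """The coordinator waits only for the fastest replica of each
    partition, so one slow worker (a straggler) cannot stall the round."""
    fastest_per_partition = [
        min(compute_times[w] for w in assigned)
        for assigned in assignment.values()
    ]
    # The round finishes when the slowest partition's fastest replica is done.
    return max(fastest_per_partition)

workers = ["w0", "w1", "w2", "w3"]
compute_times = {"w0": 1.0, "w1": 1.2, "w2": 0.9, "w3": 10.0}  # w3 straggles

plain = max(compute_times.values())  # uncoded: wait for every worker
coded = round_time_with_redundancy(
    assign_with_redundancy(4, workers), compute_times)
print(plain, coded)
```

With replication factor 2, the round time drops from the straggler's 10.0 to roughly the speed of the fast workers, at the cost of doing each partition's work twice; the paper's contribution lies in tuning this kind of redundancy (and gradient compression) dynamically over a D2D graph.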

Bibliographic Details
Published in: IEEE Transactions on Communications, 2023-06, Vol. 71 (6), p. 1-1
Main Authors: Zeulin, Nikita; Galinina, Olga; Himayat, Nageen; Andreev, Sergey; Heath, Robert W.
Format: Article
Language: English
Online Access: Full text
DOI: 10.1109/TCOMM.2023.3259442
Publisher: IEEE (New York)
CODEN: IECMBT
ORCID iDs: 0000-0002-4060-9406; 0000-0002-4666-5628; 0000-0001-8223-3665; 0000-0002-3001-8389; 0000-0002-5386-1061
ISSN: 0090-6778
EISSN: 1558-0857
Source: IEEE Electronic Library (IEL)
Subjects: coded computing; Collaboration; Communication; Computation offloading; Computational modeling; Computer aided instruction; data compression; Data exchange; Data models; Data processing; Device-to-device communication; device-to-device communications; Distance learning; Graph theory; Heterogeneity; load balancing; Machine learning; online distributed learning; Optimization; Real time; System dynamics; Training; Wireless networks