Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory
Split learning is a privacy-preserving distributed learning paradigm in which an ML model (e.g., a neural network) is split into two parts (i.e., an encoder and a decoder). The encoder shares a so-called latent representation, rather than raw data, for model training. In mobile-edge computing, network...
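For orientation, the following minimal PyTorch sketch illustrates the encoder-decoder split described in the abstract: the encoder runs on the user equipment (UE), and only its latent output crosses the UE-edge link, where a decoder consumes it for a prediction task such as throughput forecasting. All class names, dimensions, and the GRU-based architecture are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the split-learning setup described in the abstract.
# The encoder lives on the UE; the decoder lives in the edge network.

class UEEncoder(nn.Module):
    """Runs on the UE; emits a low-dimensional latent instead of raw data."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.rnn = nn.GRU(in_dim, 32, batch_first=True)   # sequential model, as in the paper's RNN discussion
        self.to_latent = nn.Linear(32, latent_dim)

    def forward(self, x):                       # x: (batch, time, in_dim)
        _, h = self.rnn(x)                      # h: (1, batch, 32), final hidden state
        return self.to_latent(h.squeeze(0))     # latent z: (batch, latent_dim)

class EdgeDecoder(nn.Module):
    """Runs in the edge network; predicts (e.g.) future throughput from the latent."""
    def __init__(self, latent_dim: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, z):
        return self.head(z)

encoder = UEEncoder(in_dim=4, latent_dim=8)
decoder = EdgeDecoder(latent_dim=8)

x = torch.randn(16, 20, 4)   # 16 windows of 20 time steps of UE-side measurements
z = encoder(x)               # only z is transmitted over the UE-edge link
y_hat = decoder(z)           # edge-side prediction, shape (16, 1)
```

The privacy and bandwidth benefits both hinge on `z` being much smaller and less identifying than `x`; the paper's contribution is making that compression level tunable at run time.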
Saved in:

Main authors: | Alhussein, Omar; Wei, Moshi; Akhavain, Arashmid |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Computer Science - Networking and Internet Architecture |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Alhussein, Omar; Wei, Moshi; Akhavain, Arashmid |
description | Split learning is a privacy-preserving distributed learning paradigm in which an ML model (e.g., a neural network) is split into two parts (i.e., an encoder and a decoder). The encoder shares a so-called latent representation, rather than raw data, for model training. In mobile-edge computing, network functions (such as traffic forecasting) can be trained via split learning, where an encoder resides in a user equipment (UE) and a decoder resides in the edge network. Based on the data processing inequality and the information bottleneck (IB) theory, we present a new framework and training mechanism to enable a dynamic balancing of the transmission resource consumption with the informativeness of the shared latent representations, which directly impacts the predictive performance. The proposed training mechanism offers an encoder-decoder neural network architecture featuring multiple modes of complexity-relevance tradeoffs, enabling tunable performance. The adaptability can accommodate varying real-time network conditions and application requirements, potentially reducing operational expenditure and enhancing network agility. As a proof of concept, we apply the training mechanism to a millimeter-wave (mmWave)-enabled throughput prediction problem. We also offer new insights and highlight some challenges related to recurrent neural networks from the perspective of the IB theory. Interestingly, we find a compression phenomenon across the temporal domain of the sequential model, in addition to the compression phase that occurs with the number of training epochs. |
doi_str_mv | 10.48550/arxiv.2309.02787 |
format | Article |
fullrecord | Raw XML discovery-system record omitted (it repeats the title, authors, and abstract verbatim and carries internal link-resolver markup). Unique recoverable fields: genre article; date 2023-09-06; rights http://arxiv.org/licenses/nonexclusive-distrib/1.0; open access (free_for_read); backlink https://doi.org/10.48550/arXiv.2309.02787; source arXiv.org. |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2309.02787 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2309_02787 |
source | arXiv.org |
subjects | Computer Science - Learning; Computer Science - Networking and Internet Architecture |
title | Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T01%3A36%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Dynamic%20Encoding%20and%20Decoding%20of%20Information%20for%20Split%20Learning%20in%20Mobile-Edge%20Computing:%20Leveraging%20Information%20Bottleneck%20Theory&rft.au=Alhussein,%20Omar&rft.date=2023-09-06&rft_id=info:doi/10.48550/arxiv.2309.02787&rft_dat=%3Carxiv_GOX%3E2309_02787%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
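As background on the "complexity-relevance tradeoff" named in the abstract: in information bottleneck theory this tradeoff is commonly formalized by the IB Lagrangian below. This is the standard Tishby-style form, given here for orientation only; the paper's exact objective may differ.

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```

Here $I(X;Z)$ measures the complexity of the latent representation $Z$ (a proxy for its transmission cost), $I(Z;Y)$ measures its relevance to the prediction target $Y$, and the multiplier $\beta$ selects an operating point: smaller $\beta$ favors compact, cheap-to-transmit latents, while larger $\beta$ favors predictive performance, matching the "multiple modes" the abstract describes.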