Hybrid Learning for Orchestrating Deep Learning Inference in Multi-user Edge-cloud Networks
Deep-learning-based intelligent services have become prevalent in cyber-physical applications including smart cities and health-care. Collaborative end-edge-cloud computing for deep learning provides a range of performance and efficiency that can address application requirements through computation...
Saved in:
Main Authors: | Shahhosseini, Sina; Hu, Tianyi; Seo, Dongjoo; Kanduri, Anil; Donyanavard, Bryan; Rahmani, Amir M; Dutt, Nikil |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning |
Online Access: | Order full text |
creator | Shahhosseini, Sina; Hu, Tianyi; Seo, Dongjoo; Kanduri, Anil; Donyanavard, Bryan; Rahmani, Amir M; Dutt, Nikil |
description | Deep-learning-based intelligent services have become prevalent in
cyber-physical applications including smart cities and health-care.
Collaborative end-edge-cloud computing for deep learning provides a range of
performance and efficiency trade-offs that can address application requirements through
computation offloading. The decision to offload computation is a
communication-computation co-optimization problem that varies with both system
parameters (e.g., network condition) and workload characteristics (e.g.,
inputs). Identifying optimal orchestration considering the cross-layer
opportunities and requirements in the face of varying system dynamics is a
challenging multi-dimensional problem. While Reinforcement Learning (RL)
approaches have been proposed earlier, they suffer from a large number of
trial-and-error interactions during the learning process, resulting in excessive time and
resource consumption. We present a Hybrid Learning orchestration framework that
reduces the number of interactions with the system environment by combining
model-based and model-free reinforcement learning. Our Deep Learning inference
orchestration strategy employs reinforcement learning to find the optimal
orchestration policy. Furthermore, we deploy Hybrid Learning (HL) to accelerate
the RL learning process and reduce the number of direct samplings. We
demonstrate the efficacy of our HL strategy through experimental comparison with
state-of-the-art RL-based inference orchestration, showing that our HL
strategy accelerates the learning process by up to 166.6x. |
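To make the hybrid-learning idea concrete, the sketch below shows one common way to combine model-based and model-free reinforcement learning (a Dyna-style loop): each real interaction with the edge-cloud environment updates both a Q-function and a learned transition/reward model, and additional simulated updates are then drawn from that model, so far fewer direct samplings of the real system are needed. This is a minimal illustrative sketch only; the toy environment, state/action encodings, reward definition, and all names (`step`, `STATES`, `ACTIONS`) are assumptions, not the authors' actual implementation.

```python
import random
from collections import defaultdict

# Toy stand-in for the multi-user edge-cloud environment (illustrative only):
# states encode a coarse network condition, actions choose an offload target.
STATES = ["net_good", "net_poor"]
ACTIONS = ["local", "edge", "cloud"]

def step(state, action):
    """Hypothetical latency-based reward; not the paper's actual system model."""
    base = {"local": 0.40, "edge": 0.15, "cloud": 0.25}[action]
    net_penalty = 0.0 if action == "local" else (0.05 if state == "net_good" else 0.30)
    latency = base + net_penalty + random.gauss(0, 0.02)
    next_state = random.choice(STATES)           # network condition drifts randomly
    return next_state, -latency                  # reward = negative latency

# Dyna-style hybrid learning: model-free Q-learning plus planning on a learned model.
Q = defaultdict(float)
model = {}                                       # (state, action) -> (next_state, reward)
alpha, gamma, eps, planning_steps = 0.1, 0.9, 0.1, 20

def q_update(s, a, r, s2):
    best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

state = random.choice(STATES)
for episode in range(200):                       # relatively few *real* interactions
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)

    q_update(state, action, reward, next_state)  # model-free update from the real sample
    model[(state, action)] = (next_state, reward)

    for _ in range(planning_steps):              # extra model-based (simulated) updates
        s, a = random.choice(list(model))
        s2, r = model[(s, a)]
        q_update(s, a, r, s2)

    state = next_state

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```

The `planning_steps` simulated updates are where the sample-efficiency gain comes from: each real measurement of the edge-cloud system is reused many times by the learned model, which is the intuition behind reducing the number of direct samplings during RL training.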
doi_str_mv | 10.48550/arxiv.2202.11098 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2202.11098 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2202_11098 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence Computer Science - Learning |
title | Hybrid Learning for Orchestrating Deep Learning Inference in Multi-user Edge-cloud Networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-29T02%3A11%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hybrid%20Learning%20for%20Orchestrating%20Deep%20Learning%20Inference%20in%20Multi-user%20Edge-cloud%20Networks&rft.au=Shahhosseini,%20Sina&rft.date=2022-02-21&rft_id=info:doi/10.48550/arxiv.2202.11098&rft_dat=%3Carxiv_GOX%3E2202_11098%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |