Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks
The edge computing paradigm places compute-capable devices - edge servers - at the network edge to assist mobile devices in executing data analysis tasks. Intuitively, offloading compute-intense tasks to edge servers can reduce their execution time. However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading. Herein, we focus on edge computing supporting remote object detection by means of Deep Neural Networks (DNNs), and develop a framework to reduce the amount of data transmitted over the wireless link. The core idea we propose builds on recent approaches splitting DNNs into sections - namely head and tail models - executed by the mobile device and edge server, respectively. The wireless link, then, is used to transport the output of the last layer of the head model to the edge server, instead of the DNN input. Most prior work focuses on classification tasks and leaves the DNN structure unaltered. Herein, we focus on DNNs for three different object detection tasks, which present a much more convoluted structure, and we modify the architecture of the network to: (i) achieve in-network compression by introducing a bottleneck layer in the early layers of the head model, and (ii) prefilter pictures that do not contain objects of interest using a convolutional neural network. Results show that the proposed technique represents an effective intermediate option between local and edge computing in a parameter region where these extreme point solutions fail to provide satisfactory performance. The code and trained models are available at https://github.com/yoshitomo-matsubara/hnd-ghnd-object-detectors.
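To make the head/bottleneck/tail split and the prefilter described above concrete, the following PyTorch code is a minimal sketch, not the authors' architecture: the module names (BottleneckHead, Tail, Prefilter), layer shapes, channel counts, and the gating threshold are illustrative assumptions, and the actual trained models are in the repository linked above.

```python
import torch
from torch import nn


class BottleneckHead(nn.Module):
    """Head model executed on the mobile device: a few early layers plus a
    narrow 'bottleneck' convolution whose small output is what is actually
    sent to the edge server, instead of the full input image."""

    def __init__(self, in_channels: int = 3, bottleneck_channels: int = 12):
        super().__init__()
        self.early_layers = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        # Far fewer channels than a normal backbone stage -> in-network compression.
        self.bottleneck = nn.Conv2d(64, bottleneck_channels, kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bottleneck(self.early_layers(x))


class Tail(nn.Module):
    """Tail model executed on the edge server: expands the compressed tensor
    back into a feature map for the remaining detector layers."""

    def __init__(self, bottleneck_channels: int = 12, out_channels: int = 256):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck_channels, 128, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)


class Prefilter(nn.Module):
    """Lightweight on-device binary classifier: frames judged unlikely to
    contain any object of interest are discarded locally, skipping both the
    wireless transfer and the edge-side detection."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Probability that the frame contains at least one object of interest.
        return torch.sigmoid(self.classifier(self.features(x).flatten(1)))


if __name__ == "__main__":
    head, tail, prefilter = BottleneckHead(), Tail(), Prefilter()
    image = torch.randn(1, 3, 224, 224)   # frame captured on the mobile device
    if prefilter(image).item() > 0.5:     # offload only frames that pass the prefilter
        compressed = head(image)          # small tensor sent over the wireless link
        features = tail(compressed)       # feature map reconstructed at the edge server
        print(f"{image.numel()} input values -> {compressed.numel()} transmitted values")
    else:
        print("frame filtered out on the device; nothing transmitted")
```

In the actual framework the tail feeds the remaining layers of a full object detector, and the head and tail must be trained so that detection accuracy survives the compression; the trained variants are in the repository linked above.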
Published in: arXiv.org 2020-10
Main authors: Matsubara, Yoshitomo; Levorato, Marco
Format: Article
Language: eng
Subjects: Artificial neural networks; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Data analysis; Edge computing; Electronic devices; Neural networks; Object recognition; Pictures; Servers; Wireless networks
Online access: Full text
| Field | Value |
|---|---|
| container_title | arXiv.org |
| creator | Matsubara, Yoshitomo; Levorato, Marco |
| doi | 10.48550/arxiv.2007.15818 |
| format | Article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2020-10 |
| issn | 2331-8422 |
| language | eng |
| recordid | cdi_arxiv_primary_2007_15818 |
| source | arXiv.org; Open Access: Freely Accessible Journals by multiple vendors |
| subjects | Artificial neural networks; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Data analysis; Edge computing; Electronic devices; Neural networks; Object recognition; Pictures; Servers; Wireless networks |
| title | Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks |