Autonomous Navigation via Deep Reinforcement Learning for Resource Constraint Edge Nodes Using Transfer Learning
Smart and agile drones are fast becoming ubiquitous at the edge of the cloud. The usage of these drones is constrained by their limited power and compute capability. In this paper, we present a Transfer Learning (TL) based approach to reduce the on-board computation required to train a deep neural network for autonomous navigation via value-based Deep Reinforcement Learning for a target algorithmic performance. A library of 3D realistic meta-environments is manually designed using the Unreal Gaming Engine and the network is trained end-to-end. These trained meta-weights are then used as initializers to the network in a test environment and fine-tuned for the last few fully connected layers. Variation in drone dynamics and environmental characteristics is carried out to show the robustness of the approach. Using the NVIDIA GPU profiler, it was shown that the energy consumption and training latency are reduced by 3.7× and 1.8×, respectively, without significant degradation in performance in terms of the average distance traveled before crash, i.e. Mean Safe Flight (MSF). The approach is also tested in a real environment using a DJI Tello drone and similar results were reported. The code for the approach can be found on GitHub: https://github.com/aqeelanwar/DRLwithTL.
Saved in:
Published in: | IEEE access 2020, Vol.8, p.26549-26560 |
---|---|
Main authors: | Anwar, Aqeel; Raychowdhury, Arijit |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 26560 |
---|---|
container_issue | |
container_start_page | 26549 |
container_title | IEEE access |
container_volume | 8 |
creator | Anwar, Aqeel; Raychowdhury, Arijit |
description | Smart and agile drones are fast becoming ubiquitous at the edge of the cloud. The usage of these drones is constrained by their limited power and compute capability. In this paper, we present a Transfer Learning (TL) based approach to reduce the on-board computation required to train a deep neural network for autonomous navigation via value-based Deep Reinforcement Learning for a target algorithmic performance. A library of 3D realistic meta-environments is manually designed using the Unreal Gaming Engine and the network is trained end-to-end. These trained meta-weights are then used as initializers to the network in a test environment and fine-tuned for the last few fully connected layers. Variation in drone dynamics and environmental characteristics is carried out to show the robustness of the approach. Using the NVIDIA GPU profiler, it was shown that the energy consumption and training latency are reduced by 3.7× and 1.8×, respectively, without significant degradation in performance in terms of the average distance traveled before crash, i.e. Mean Safe Flight (MSF). The approach is also tested in a real environment using a DJI Tello drone and similar results were reported. The code for the approach can be found on GitHub: https://github.com/aqeelanwar/DRLwithTL. |
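The fine-tuning scheme the abstract describes — pretrained meta-weights reused as initializers, with only the last few fully connected layers updated in the target environment — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's actual network or training code: the layer sizes, the `finetune_step` update rule, and the `trainable` flags are hypothetical placeholders standing in for the real DRL loss and backpropagation.

```python
# Minimal sketch of the transfer-learning idea from the abstract:
# reuse pretrained "meta-weights" and fine-tune only the last
# fully connected layers. Pure NumPy; all sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretrained meta-weights for a small 3-layer MLP (hypothetical sizes).
weights = [rng.standard_normal((16, 32)),   # frozen feature layer
           rng.standard_normal((32, 8)),    # fine-tuned FC layer
           rng.standard_normal((8, 4))]     # fine-tuned FC output layer
trainable = [False, True, True]             # freeze all but the last FC layers

def forward(x, weights):
    """ReLU MLP forward pass over the layer list."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

def finetune_step(weights, trainable, lr=1e-3):
    """Stand-in update: nudge only trainable layers. A real value-based
    DRL loss and backprop would go here; frozen layers stay untouched."""
    return [w - lr * np.sign(w) if t else w
            for w, t in zip(weights, trainable)]

frozen_before = weights[0].copy()
weights = finetune_step(weights, trainable)
assert np.array_equal(weights[0], frozen_before)  # frozen layer unchanged
```

Updating only the last layers shrinks the set of gradients that must be computed and stored on-device, which is the mechanism behind the reported reduction in on-board training energy and latency.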
doi_str_mv | 10.1109/ACCESS.2020.2971172 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2020, Vol.8, p.26549-26560 |
issn | 2169-3536 2169-3536 |
language | eng |
recordid | cdi_crossref_primary_10_1109_ACCESS_2020_2971172 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |
subjects | Artificial neural networks; Autonomous navigation; Autonomous robots; Constraints; Deep learning; deep reinforcement learning; drone; Drones; Energy consumption; Machine learning; Reinforcement learning; Task analysis; Training; transfer learning |
title | Autonomous Navigation via Deep Reinforcement Learning for Resource Constraint Edge Nodes Using Transfer Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T10%3A48%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Autonomous%20Navigation%20via%20Deep%20Reinforcement%20Learning%20for%20Resource%20Constraint%20Edge%20Nodes%20Using%20Transfer%20Learning&rft.jtitle=IEEE%20access&rft.au=Anwar,%20Aqeel&rft.date=2020&rft.volume=8&rft.spage=26549&rft.epage=26560&rft.pages=26549-26560&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2020.2971172&rft_dat=%3Cproquest_cross%3E2454764999%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2454764999&rft_id=info:pmid/&rft_ieee_id=8978577&rft_doaj_id=oai_doaj_org_article_e1c6dbd5a18645e391c5d682f4b78e79&rfr_iscdi=true |