Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge
This paper presents our method for enabling a UAV quadrotor, equipped with a monocular camera, to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. Compared to obstacle avoidance in ground vehicular robots, UAV navigation poses additional challenges because UAV motion is no longer constrained to a well-defined indoor ground or street environment. Unlike ground vehicular robots, a UAV has to navigate around many more types of obstacles: objects such as decorative items, furnishings, ceiling fans, sign-boards, and tree branches are also potential obstacles for a UAV. Thus, methods of obstacle avoidance developed for ground robots are clearly inadequate for UAV navigation. Current control methods using monocular images for UAV obstacle avoidance depend heavily on environment information, and these controllers do not fully retain and utilize the extensively available information about the ambient environment for decision making. We propose a deep reinforcement learning based method for UAV obstacle avoidance (OA) that does exactly this. The crucial idea in our method is the concept of partial observability and how a UAV can retain relevant information about the environment structure to make better future navigation decisions. Our OA technique uses recurrent neural networks with temporal attention and provides better results than prior works in terms of distance covered without collisions. In addition, our technique has a high inference rate and reduces power wastage as it minimizes oscillatory motion of the UAV.
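The record gives no implementation details, but the temporal-attention idea the abstract describes can be illustrated in isolation: score a history of recurrent hidden states against the current state and form an attention-weighted context vector for the downstream Q-value estimate. The sketch below is a hypothetical pure-Python illustration of that weighting step, not the authors' implementation; all names and dimensions are invented for the example.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention(history, query):
    """Weight past RNN hidden states by similarity to the current one.

    history: list of hidden-state vectors from earlier camera frames
    query:   current hidden-state vector
    returns: attention-weighted context vector of the same dimension
    """
    # Score each past state by its dot product with the current state.
    scores = [sum(h_d * q_d for h_d, q_d in zip(h, query)) for h in history]
    weights = softmax(scores)
    # Context vector: weighted sum of the past hidden states.
    dim = len(query)
    return [sum(w * h[d] for w, h in zip(weights, history)) for d in range(dim)]

# Toy example: three past 2-D hidden states and a current state.
history = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]
context = temporal_attention(history, query)
```

In a full agent this context vector would be concatenated with (or replace) the current hidden state before the Q-value head, letting the policy draw on frames seen earlier in the flight rather than only the latest image.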
Saved in:
Published in: | IEEE transactions on intelligent transportation systems 2021-01, Vol.22 (1), p.107-118 |
---|---|
Main Authors: | Singla, Abhik ; Padakandla, Sindhu ; Bhatnagar, Shalabh |
Format: | Article |
Language: | English |
Subjects: | Cameras ; Collision avoidance ; Deep learning ; deep reinforcement learning (DRL) ; Obstacle avoidance ; partial observability ; Recurrent neural networks ; Unmanned aerial vehicles |
Online Access: | Order full text |
container_end_page | 118 |
---|---|
container_issue | 1 |
container_start_page | 107 |
container_title | IEEE transactions on intelligent transportation systems |
container_volume | 22 |
creator | Singla, Abhik ; Padakandla, Sindhu ; Bhatnagar, Shalabh |
description | This paper presents our method for enabling a UAV quadrotor, equipped with a monocular camera, to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. Compared to obstacle avoidance in ground vehicular robots, UAV navigation poses additional challenges because UAV motion is no longer constrained to a well-defined indoor ground or street environment. Unlike ground vehicular robots, a UAV has to navigate around many more types of obstacles: objects such as decorative items, furnishings, ceiling fans, sign-boards, and tree branches are also potential obstacles for a UAV. Thus, methods of obstacle avoidance developed for ground robots are clearly inadequate for UAV navigation. Current control methods using monocular images for UAV obstacle avoidance depend heavily on environment information, and these controllers do not fully retain and utilize the extensively available information about the ambient environment for decision making. We propose a deep reinforcement learning based method for UAV obstacle avoidance (OA) that does exactly this. The crucial idea in our method is the concept of partial observability and how a UAV can retain relevant information about the environment structure to make better future navigation decisions. Our OA technique uses recurrent neural networks with temporal attention and provides better results than prior works in terms of distance covered without collisions. In addition, our technique has a high inference rate and reduces power wastage as it minimizes oscillatory motion of the UAV. |
doi_str_mv | 10.1109/TITS.2019.2954952 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1524-9050 |
ispartof | IEEE transactions on intelligent transportation systems, 2021-01, Vol.22 (1), p.107-118 |
issn | 1524-9050 ; 1558-0016 |
language | eng |
recordid | cdi_proquest_journals_2473267840 |
source | IEEE Electronic Library (IEL) |
subjects | Cameras ; Collision avoidance ; Collisions ; Control methods ; Decision making ; Deep learning ; deep Q-networks (DQN) ; deep reinforcement learning (DRL) ; Indoor environments ; Navigation ; Obstacle avoidance ; partial observability ; Recurrent neural networks ; Robots ; Simultaneous localization and mapping ; Unmanned aerial vehicle (UAV) obstacle avoidance (OA) ; Unmanned aerial vehicles ; Visualization |
title | Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T13%3A05%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Memory-Based%20Deep%20Reinforcement%20Learning%20for%20Obstacle%20Avoidance%20in%20UAV%20With%20Limited%20Environment%20Knowledge&rft.jtitle=IEEE%20transactions%20on%20intelligent%20transportation%20systems&rft.au=Singla,%20Abhik&rft.date=2021-01&rft.volume=22&rft.issue=1&rft.spage=107&rft.epage=118&rft.pages=107-118&rft.issn=1524-9050&rft.eissn=1558-0016&rft.coden=ITISFG&rft_id=info:doi/10.1109/TITS.2019.2954952&rft_dat=%3Cproquest_RIE%3E2473267840%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2473267840&rft_id=info:pmid/&rft_ieee_id=8917687&rfr_iscdi=true |