Hierarchical framework integrating rapidly-exploring random tree with deep reinforcement learning for autonomous vehicle

Published in: Applied intelligence (Dordrecht, Netherlands), 2023-07, Vol. 53 (13), p. 16473-16486
Authors: Yu, Jiaxing; Arab, Aliasghar; Yi, Jingang; Pei, Xiaofei; Guo, Xuexun
Format: Article
Language: English
Keywords:
Online access: Full text
container_end_page 16486
container_issue 13
container_start_page 16473
container_title Applied intelligence (Dordrecht, Netherlands)
container_volume 53
creator Yu, Jiaxing
Arab, Aliasghar
Yi, Jingang
Pei, Xiaofei
Guo, Xuexun
description This paper proposes a systematic driving framework in which a reinforcement learning (RL) decision-making module is integrated with rapidly-exploring random tree (RRT) motion planning. RL generates local goals and semantic speed commands that control the longitudinal speed of the vehicle, with rewards designed for driving safety and traffic efficiency. To guarantee driving comfort, RRT returns a feasible path for the vehicle to follow at the commanded speeds. A scene-decomposition approach is used to scale the deep neural network (DNN) to environments with multiple traffic participants, and double deep Q-networks (DDQN) with prioritized experience replay (PER) accelerate the training process. To handle disturbances in the agent's perception, an ensemble of neural networks evaluates the uncertainty of decisions. The proposed framework is shown to handle unexpected actions of traffic participants at an intersection, yielding safe, comfortable, and efficient driving behavior. Moreover, the ensemble of DDQN with PER proves superior to standard DDQN in learning efficiency and robustness to disturbances.
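The hierarchy the abstract describes (an RL layer choosing semantic speed commands, an ensemble of Q-estimators scoring decision uncertainty, and an RRT planner returning a feasible path) can be sketched in miniature. Everything below is illustrative only: the speed commands, reward shaping, noise levels, and obstacle geometry are invented stand-ins, not the authors' implementation.

```python
import math
import random
import statistics

random.seed(1)

# Toy stand-ins for the framework's components: the "RL policy" is an
# argmax over semantic speed commands, the "ensemble" is a set of noisy
# Q-estimators, and RRT runs on a 2-D plane with circular obstacles.
SPEED_COMMANDS = {"stop": 0.0, "slow": 2.0, "cruise": 8.0}

def ensemble_q(state_speed, action, members=5):
    """Each ensemble member gives a perturbed Q-estimate; the spread
    across members serves as a proxy for decision uncertainty."""
    base = -abs(state_speed - SPEED_COMMANDS[action])  # toy reward shaping
    return [base + random.gauss(0.0, 0.1) for _ in range(members)]

def decide(state_speed):
    """Return the speed command with the highest mean ensemble Q-value,
    plus the standard deviation of that command's estimates."""
    scored = {a: ensemble_q(state_speed, a) for a in SPEED_COMMANDS}
    best = max(scored, key=lambda a: statistics.mean(scored[a]))
    return best, statistics.stdev(scored[best])

def collision_free(p, q, obstacles, checks=10):
    """Sample the segment p->q and test against circular obstacles,
    given as ((cx, cy), radius) pairs."""
    for i in range(checks + 1):
        t = i / checks
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - cx, y - cy) <= r for (cx, cy), r in obstacles):
            return False
    return True

def rrt(start, goal, obstacles, step=1.0, iters=800, bounds=(0.0, 20.0)):
    """Minimal RRT: grow a tree from start toward random samples and
    return a start->goal path once the goal is reachable, else None."""
    parent = {start: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        near = min(parent, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if new in parent or not collision_free(near, new, obstacles):
            continue
        parent[new] = near
        if math.dist(new, goal) <= step and collision_free(new, goal, obstacles):
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

# One step of the hierarchy: the high-level policy picks a command and
# reports its uncertainty; the low-level planner supplies a feasible path.
command, uncertainty = decide(state_speed=5.0)
path = rrt((1.0, 1.0), (18.0, 18.0), [((10.0, 10.0), 2.0)])
```

In the paper's setup, a large ensemble spread would flag an unreliable decision (e.g. to fall back to a conservative command); here the spread is simply reported alongside the chosen action.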
doi_str_mv 10.1007/s10489-022-04358-7
format Article
fulltext fulltext
identifier ISSN: 0924-669X
ispartof Applied intelligence (Dordrecht, Netherlands), 2023-07, Vol.53 (13), p.16473-16486
issn 0924-669X
1573-7497
language eng
recordid cdi_proquest_journals_2831893969
source SpringerLink Journals
subjects Artificial Intelligence
Artificial neural networks
Autonomous vehicles
Computer Science
Decision making
Decision trees
Deep learning
Machine learning
Machines
Manufacturing
Mechanical Engineering
Motion planning
Neural networks
Processes
Vehicle safety
title Hierarchical framework integrating rapidly-exploring random tree with deep reinforcement learning for autonomous vehicle