Conditional Predictive Behavior Planning With Inverse Reinforcement Learning for Human-Like Autonomous Driving

Making safe and human-like decisions is an essential capability of autonomous driving systems, and learning-based behavior planning presents a promising pathway toward achieving this objective. Distinguished from existing learning-based methods that directly output decisions, this work introduces a predictive behavior planning framework that learns to predict and evaluate from human driving data. This framework consists of three components: a behavior generation module that produces a diverse set of candidate behaviors in the form of trajectory proposals, a conditional motion prediction network that predicts future trajectories of other agents based on each proposal, and a scoring module that evaluates the candidate plans using maximum entropy inverse reinforcement learning (IRL). The framework is validated on a large-scale real-world urban driving dataset and outperforms baseline methods in terms of similarity to human driving trajectories.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on intelligent transportation systems, 2023-07, Vol. 24 (7), p. 7244-7258
Main authors: Huang, Zhiyu; Liu, Haochen; Wu, Jingda; Lv, Chen
Format: Article
Language: English
Subjects:
Online access: Order full text
container_end_page 7258
container_issue 7
container_start_page 7244
container_title IEEE transactions on intelligent transportation systems
container_volume 24
creator Huang, Zhiyu
Liu, Haochen
Wu, Jingda
Lv, Chen
description Making safe and human-like decisions is an essential capability of autonomous driving systems, and learning-based behavior planning presents a promising pathway toward achieving this objective. Distinguished from existing learning-based methods that directly output decisions, this work introduces a predictive behavior planning framework that learns to predict and evaluate from human driving data. This framework consists of three components: a behavior generation module that produces a diverse set of candidate behaviors in the form of trajectory proposals, a conditional motion prediction network that predicts future trajectories of other agents based on each proposal, and a scoring module that evaluates the candidate plans using maximum entropy inverse reinforcement learning (IRL). We validate the proposed framework on a large-scale real-world urban driving dataset through comprehensive experiments. The results show that the conditional prediction model can predict distinct and reasonable future trajectories given different trajectory proposals and the IRL-based scoring module can select plans that are close to human driving. The proposed framework outperforms other baseline methods in terms of similarity to human driving trajectories. Additionally, we find that the conditional prediction model improves both prediction and planning performance compared to the non-conditional model. Lastly, we note that the learning of the scoring module is crucial for aligning the evaluations with human drivers.
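The abstract describes scoring candidate trajectory proposals with maximum entropy IRL. As a minimal, illustrative sketch (not the authors' implementation), the idea can be shown with a linear reward over hand-crafted plan features: a softmax over candidate rewards gives the max-entropy probability of each plan, and the gradient of the log-likelihood of the human-driven plan is simply its features minus the expected features under the model. All names and the toy features below are hypothetical.

```python
import numpy as np

def maxent_irl_step(features, expert_idx, theta, lr=0.1):
    """One gradient-ascent step of maximum-entropy IRL over a candidate set.

    features:   (N, D) array, one feature vector per candidate plan
    expert_idx: index of the human-driven (expert) plan among the candidates
    theta:      (D,) linear reward weights
    """
    rewards = features @ theta
    # Softmax over candidates: the max-entropy probability of each plan
    probs = np.exp(rewards - rewards.max())  # subtract max for stability
    probs /= probs.sum()
    # Log-likelihood gradient: expert features minus expected features
    grad = features[expert_idx] - probs @ features
    return theta + lr * grad, probs

# Toy example: 4 candidate plans, 3 hypothetical features
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
theta = np.zeros(3)
for _ in range(200):
    theta, probs = maxent_irl_step(feats, expert_idx=0, theta=theta)
# After training, the expert's plan should receive the highest score
assert probs.argmax() == 0
```

In the paper's setting the features would come from the conditional prediction network's output (e.g. safety and comfort terms evaluated against the predicted trajectories of other agents) rather than random vectors; the linear-reward softmax structure is what "maximum entropy IRL scoring" refers to here.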
doi_str_mv 10.1109/TITS.2023.3254579
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1524-9050
ispartof IEEE transactions on intelligent transportation systems, 2023-07, Vol.24 (7), p.7244-7258
issn 1524-9050
1558-0016
language eng
recordid cdi_proquest_journals_2834308245
source IEEE Electronic Library (IEL)
subjects autonomous driving
Autonomous vehicles
Behavior planning
Behavioral sciences
conditional motion prediction
Decisions
inverse reinforcement learning
Maximum entropy
Modules
Planning
Prediction models
Predictive models
Proposals
Reinforcement learning
Trajectory
title Conditional Predictive Behavior Planning With Inverse Reinforcement Learning for Human-Like Autonomous Driving
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-15T05%3A16%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Conditional%20Predictive%20Behavior%20Planning%20With%20Inverse%20Reinforcement%20Learning%20for%20Human-Like%20Autonomous%20Driving&rft.jtitle=IEEE%20transactions%20on%20intelligent%20transportation%20systems&rft.au=Huang,%20Zhiyu&rft.date=2023-07-01&rft.volume=24&rft.issue=7&rft.spage=7244&rft.epage=7258&rft.pages=7244-7258&rft.issn=1524-9050&rft.eissn=1558-0016&rft.coden=ITISFG&rft_id=info:doi/10.1109/TITS.2023.3254579&rft_dat=%3Cproquest_RIE%3E2834308245%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2834308245&rft_id=info:pmid/&rft_ieee_id=10073960&rfr_iscdi=true