Pre-training with asynchronous supervised learning for reinforcement learning based autonomous driving
Published in: Frontiers of Information Technology & Electronic Engineering, 2021-05, Vol. 22 (5), p. 673-686
Main authors: Wang, Yunpeng; Zheng, Kunxian; Tian, Daxin; Duan, Xuting; Zhou, Jianshan
Publisher: Zhejiang University Press, Hangzhou
Format: Article
Language: English
Online access: Full text
Description: Rule-based autonomous driving systems may suffer from increased complexity with large-scale intercoupled rules, so many researchers are exploring learning-based approaches. Reinforcement learning (RL) has been applied in designing autonomous driving systems because of its outstanding performance on a wide variety of sequential control problems. However, poor initial performance is a major challenge to the practical implementation of an RL-based autonomous driving system. RL training requires extensive training data before the model achieves reasonable performance, making an RL-based model inapplicable in a real-world setting, particularly when data are expensive. We propose an asynchronous supervised learning (ASL) method for the RL-based end-to-end autonomous driving model to address the problem of poor initial performance before training this RL-based model in real-world settings. Specifically, prior knowledge is introduced in the ASL pre-training stage by asynchronously executing multiple supervised learning processes in parallel, on multiple driving demonstration data sets. After pre-training, the model is deployed on a real vehicle to be further trained by RL to adapt to the real environment and continuously push past its performance limit. The presented pre-training method is evaluated on the race car simulator TORCS (The Open Racing Car Simulator) to verify that it reliably improves the initial performance and convergence speed of an end-to-end autonomous driving model in the RL training stage. In addition, a real-vehicle verification system is built to verify the feasibility of the proposed pre-training method in a real-vehicle deployment. Simulation results show that using demonstrations during a supervised pre-training stage allows significant improvements in initial performance and convergence speed in the RL training stage.
DOI: 10.1631/FITEE.1900637
ISSN: 2095-9184
EISSN: 2095-9230
Source: Springer Nature - Complete Springer Journals; Alma/SFX Local Collection; ProQuest Central
Subjects: Algorithms; Communications Engineering; Computer Hardware; Computer Science; Computer Systems Organization and Communication Networks; Convergence; Decision making; Electrical Engineering; Electronics and Microelectronics; Innovations; Instrumentation; Machine learning; Networks; Race cars; Sequential control; Simulation; Supervised learning
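The abstract describes ASL pre-training as several supervised learning processes running asynchronously in parallel, each on its own driving-demonstration data set, before the resulting model is fine-tuned with RL. Since the record does not reproduce the paper's implementation, the snippet below is only a minimal sketch of that idea in PyTorch, Hogwild-style: worker processes share one policy network's weights, and each performs behavioral-cloning updates against a different demonstration set. The network shape, the 29-dimensional TORCS-like state, the two-dimensional (steering, throttle) action, and all hyperparameters are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp


class PolicyNet(nn.Module):
    """Toy end-to-end policy: sensor features -> continuous controls."""

    def __init__(self, state_dim: int = 29, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def asl_worker(shared_model: nn.Module, dataset, epochs: int = 5, lr: float = 1e-3):
    """One asynchronous supervised-learning process.

    Regresses demonstrated actions (behavioral cloning) and applies
    lock-free gradient steps directly to the shared weights.
    """
    optimizer = torch.optim.SGD(shared_model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for states, actions in dataset:  # one driving-demonstration data set
            optimizer.zero_grad()
            loss_fn(shared_model(states), actions).backward()
            optimizer.step()  # update lands in shared memory, no locking


if __name__ == "__main__":
    model = PolicyNet()
    model.share_memory()  # expose weights to all worker processes

    # Stand-in for multiple demonstration data sets; real data would be
    # (state, expert-action) pairs logged from human or scripted driving.
    datasets = [
        [(torch.randn(32, 29), torch.randn(32, 2).clamp(-1, 1)) for _ in range(200)]
        for _ in range(4)
    ]
    workers = [mp.Process(target=asl_worker, args=(model, ds)) for ds in datasets]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # `model` now holds the ASL pre-trained weights that would initialize
    # the subsequent RL training stage described in the abstract.
```

After the workers join, the shared weights would serve as the initialization for the RL stage; an actor-critic method such as DDPG is a common choice for continuous control in TORCS, though the specific RL algorithm used in the paper is not stated in this record.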