Example-guided learning of stochastic human driving policies using deep reinforcement learning

Deep reinforcement learning has been successfully applied to the generation of goal-directed behavior in artificial agents. However, existing algorithms are often not designed to reproduce human-like behavior, which may be desired in many environments, such as human–robot collaborations, social robotics and autonomous vehicles. Here we introduce a model-free and easy-to-implement deep reinforcement learning approach to mimic the stochastic behavior of a human expert by learning distributions of task variables from examples. As tractable use-cases, we study static and dynamic obstacle avoidance tasks for an autonomous vehicle on a highway road in simulation (Unity). Our control algorithm receives a feedback signal from two sources: a deterministic (handcrafted) part encoding basic task goals and a stochastic (data-driven) part that incorporates human expert knowledge. Gaussian processes are used to model human state distributions and to assess the similarity between machine and human behavior. Using this generic approach, we demonstrate that the learning agent acquires human-like driving skills and can generalize to new roads and obstacle distributions unseen during training.
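The abstract's two-source feedback signal lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes a one-dimensional task variable (lateral lane position as a function of longitudinal road position), synthetic stand-in expert data, and arbitrary weights, and shows one plausible way to combine a handcrafted task reward with a Gaussian-process log-density term that scores similarity to human behavior.

```python
# Minimal sketch of the two-part reward described in the abstract.
# NOTE: the 1-D state choice, kernel, placeholder expert data, and weights
# are illustrative assumptions, not details taken from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Stand-in for recorded human demonstrations: lateral lane position y as a
# function of longitudinal road position x, pooled over trajectories.
expert_x = rng.uniform(0, 100, size=(200, 1))
expert_y = 0.5 * np.sin(expert_x[:, 0] / 15.0) + 0.1 * rng.standard_normal(200)

# Gaussian process modeling the human state distribution p(y | x).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel())
gp.fit(expert_x, expert_y)

def reward(x, y, speed, collided, target_speed=25.0):
    """Feedback signal from two sources, as in the abstract."""
    # Deterministic (handcrafted) part: penalize collisions and speed error.
    r_task = -10.0 if collided else -0.01 * abs(speed - target_speed)
    # Stochastic (data-driven) part: Gaussian log-density of the agent's
    # lateral position under the GP's predictive distribution at x.
    mu, sigma = gp.predict(np.array([[x]]), return_std=True)
    r_human = -0.5 * ((y - mu[0]) / sigma[0]) ** 2 - np.log(sigma[0])
    return r_task + 0.1 * r_human  # relative weighting is an arbitrary choice

print(reward(x=42.0, y=0.3, speed=23.0, collided=False))
```

The same idea would extend to other task variables learned from examples (speed profiles, headways, and the like), with the GP's predictive mean and variance supplying the reference distribution against which the agent's states are scored.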

Bibliographic details
Published in: Neural Computing & Applications, 2023-08, Vol. 35 (23), pp. 16791-16804
Authors: Emuna, Ran; Duffney, Rotem; Borowsky, Avinoam; Biess, Armin
Format: Article
Language: English
Subjects: Agents (artificial intelligence); Algorithms; Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Control algorithms; Control theory; Data Mining and Knowledge Discovery; Deep learning; Driving; Gaussian process; Human behavior; Image Processing and Computer Vision; Obstacle avoidance; Probability and Statistics in Computer Science; Robotics; Special Issue on Human-aligned Reinforcement Learning for Autonomous Agents and Robots
Online access: Full text
DOI: 10.1007/s00521-022-07947-2
ISSN: 0941-0643
EISSN: 1433-3058
Publisher: Springer London
Source: SpringerNature Journals