IQ-Learn: Inverse soft-Q Learning for Imitation

In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available that contains useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used due to its ease of implementation and stable convergence, but it does not utilize any information about the environment's dynamics. Many existing methods that do exploit dynamics information are difficult to train in practice, either because of an adversarial optimization process over reward and policy approximators or because of biased, high-variance gradient estimators. We introduce a method for dynamics-aware IL that avoids adversarial training by learning a single Q-function, implicitly representing both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating that our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q Learning (IQ-Learn), obtains state-of-the-art results in offline and online imitation learning settings, significantly outperforming existing methods both in the number of required environment interactions and in scalability to high-dimensional spaces, often by more than 3x.
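
The abstract's central idea is that a single learned soft Q-function implicitly encodes both an imitation policy and a reward. Below is a minimal sketch of that correspondence, not the authors' implementation: it assumes a small tabular MDP with a known transition matrix P, a temperature fixed to 1, and a random array standing in for a learned Q-function; all names and sizes are illustrative.

```python
# Minimal sketch (not the paper's code): how one soft Q-function implicitly
# yields both a policy and a reward.
# Illustrative assumptions: tabular MDP, known transitions P, temperature = 1,
# random array as a stand-in for a learned Q-function.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma = 5, 3, 0.99
# P[s, a, s'] = probability of reaching s' after taking action a in state s
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
# Stand-in for a learned soft Q-function Q(s, a)
Q = rng.normal(size=(n_states, n_actions))

# Soft state value: V(s) = log sum_a exp(Q(s, a))
V = np.log(np.exp(Q).sum(axis=1))

# Implicit policy: pi(a|s) = exp(Q(s, a) - V(s))  (softmax over actions)
pi = np.exp(Q - V[:, None])

# Implicit reward via the inverse soft Bellman operator:
# r(s, a) = Q(s, a) - gamma * E_{s' ~ P(.|s, a)}[V(s')]
r = Q - gamma * (P @ V)

print("policy rows sum to 1:", np.allclose(pi.sum(axis=1), 1.0))
print("recovered reward table shape:", r.shape)  # (n_states, n_actions)
```

In this discrete toy setting the softmax and log-sum-exp are exact; the continuous-control and offline settings evaluated in the paper would require function approximation, which this sketch does not attempt.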

Bibliographic details
Main authors: Garg, Divyansh; Chakraborty, Shuvam; Cundy, Chris; Song, Jiaming; Geist, Matthieu; Ermon, Stefano
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Published: 2021-06-22
Source: arXiv.org
DOI: 10.48550/arxiv.2106.12142
Online access: https://arxiv.org/abs/2106.12142