Hyperparameter Selection for Imitation Learning

We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature on imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, were this reward function available, it could be used directly for policy training and imitation would not be necessary. To tackle this largely ignored problem, we propose a number of possible proxies to the external reward. We evaluate them in an extensive empirical study (more than 10,000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good enough HPs through a proxy to the reward function.

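The abstract does not pin down a single proxy, so the sketch below only illustrates the general recipe it describes: train agents under several HP configurations, score each run with a reward-free proxy computed from expert demonstrations and agent rollouts, and keep the best-scoring configuration. The distance-based proxy, the function names, and the toy data are assumptions made for illustration; they are not the paper's specific method.

    import numpy as np

    # Hypothetical sketch: pick the HP configuration whose trained agent best
    # matches the expert according to a reward-free proxy (the true environment
    # reward is assumed to be unobservable).

    def proxy_score(agent_states, expert_states):
        # Negative mean distance from each agent state to its nearest expert
        # state; higher is better. One plausible proxy among many.
        d = np.linalg.norm(agent_states[:, None, :] - expert_states[None, :, :], axis=-1)
        return -float(d.min(axis=1).mean())

    def select_hps(runs, expert_states):
        # `runs` maps an HP configuration name to the states visited by the
        # agent trained with that configuration.
        scores = {hp: proxy_score(states, expert_states) for hp, states in runs.items()}
        return max(scores, key=scores.get), scores

    # Toy usage with synthetic data standing in for real rollouts.
    rng = np.random.default_rng(0)
    expert = rng.normal(0.0, 1.0, size=(200, 3))                  # expert demonstration states
    runs = {
        "lr=3e-4": expert + rng.normal(0.0, 0.1, size=(200, 3)),  # agent close to the expert
        "lr=1e-2": rng.normal(2.0, 1.0, size=(200, 3)),           # agent far from the expert
    }
    best, scores = select_hps(runs, expert)
    print(best, scores)

The paper's study compares several such proxies empirically; the only point of the sketch is that HP selection can be cast as ranking runs by a score that never touches the true reward.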

Bibliographic details
Authors: Hussenot, Leonard; Andrychowicz, Marcin; Vincent, Damien; Dadashi, Robert; Raichuk, Anton; Stafiniak, Lukasz; Girgin, Sertan; Marinier, Raphael; Momchev, Nikola; Ramos, Sabela; Orsini, Manu; Bachem, Olivier; Geist, Matthieu; Pietquin, Olivier
Format: Article
Language: English
Subjects: Computer Science - Learning
Date: 2021-05-25
DOI: 10.48550/arXiv.2105.12034
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by-nc-sa/4.0
Online access: https://arxiv.org/abs/2105.12034