Fast Efficient Hyperparameter Tuning for Policy Gradients

The performance of policy gradient methods is sensitive to hyperparameter settings that must be tuned for any new application. Widely used grid search methods for tuning hyperparameters are sample inefficient and computationally expensive. More advanced methods, such as Population Based Training, learn schedules for hyperparameters instead of fixed settings and can yield better results, but they are also sample inefficient and computationally expensive. In this paper, we propose Hyperparameter Optimisation on the Fly (HOOF), a gradient-free algorithm that requires no more than one training run to automatically adapt the hyperparameters that affect the policy update directly through the gradient. The main idea is to use existing trajectories sampled by the policy gradient method to optimise a one-step improvement objective, yielding a sample- and computationally efficient algorithm that is easy to implement. Our experimental results across multiple domains and algorithms show that using HOOF to learn these hyperparameter schedules leads to faster learning with improved performance.
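
The main idea in the abstract lends itself to a short sketch. The following is a minimal, hypothetical illustration, not the authors' code: after each batch of trajectories, several candidate hyperparameter settings for the policy update are scored by weighted importance sampling on the trajectories already collected, and the best-scoring candidate update is kept. All names here (hoof_step, policy_update, log_prob, and the trajectory format) are assumptions made for the example.

    import numpy as np

    def wis_estimate(returns, logp_new, logp_old):
        # Self-normalised (weighted) importance sampling estimate of a
        # candidate policy's expected return, computed from trajectories
        # that were sampled under the old policy.
        weights = np.exp(logp_new - logp_old)   # per-trajectory IS ratios
        weights = weights / weights.sum()       # self-normalise (WIS)
        return float(np.dot(weights, returns))

    def hoof_step(policy, trajectories, candidate_hypers, policy_update, log_prob):
        # One HOOF-style iteration: reuse the current batch of trajectories
        # to score a one-step policy update for each candidate hyperparameter
        # setting, and keep the candidate with the highest estimated return.
        returns = np.array([t["return"] for t in trajectories])
        logp_old = np.array([log_prob(policy, t) for t in trajectories])
        best_value, best_policy = -np.inf, policy
        for hyper in candidate_hypers:  # e.g. learning rate, GAE lambda
            candidate = policy_update(policy, trajectories, hyper)
            logp_new = np.array([log_prob(candidate, t) for t in trajectories])
            value = wis_estimate(returns, logp_new, logp_old)
            if value > best_value:
                best_value, best_policy = value, candidate
        return best_policy  # chosen without any extra environment samples

Because every candidate is scored on the same batch, the extra cost is a few additional policy-update computations rather than additional environment interaction, which is consistent with the claim that HOOF requires no more than one training run.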

Bibliographic Details
Published in: arXiv.org, 2019-09
Main authors: Paul, Supratik; Kurin, Vitaly; Whiteson, Shimon
Format: Article
Language: English
Subjects: Algorithms; Domains; Machine learning; Optimization; Schedules; Tuning
Online access: Full text
Identifier: EISSN 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Source: Free E-Journals
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T15%3A25%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Fast%20Efficient%20Hyperparameter%20Tuning%20for%20Policy%20Gradients&rft.jtitle=arXiv.org&rft.au=Supratik,%20Paul&rft.date=2019-09-17&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2186334041%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2186334041&rft_id=info:pmid/&rfr_iscdi=true