Bayesian optimization for backpropagation in Monte-Carlo tree search
In large domains, Monte-Carlo tree search (MCTS) is required to estimate the values of the states as efficiently and accurately as possible. However, the standard update rule in backpropagation assumes a stationary distribution for the returns, and particularly in min-max trees, convergence to the true value can be slow because of averaging.
Saved in:
Main Authors: | Li, Yueqin , Lim, Nengli |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning ; Statistics - Machine Learning |
Online Access: | Order full text |
creator | Li, Yueqin ; Lim, Nengli |
---|---|
description | In large domains, Monte-Carlo tree search (MCTS) is required to estimate the
values of the states as efficiently and accurately as possible. However, the
standard update rule in backpropagation assumes a stationary distribution for
the returns, and particularly in min-max trees, convergence to the true value
can be slow because of averaging. We present two methods, Softmax MCTS and
Monotone MCTS, which generalize previous attempts to improve upon the
backpropagation strategy. We demonstrate that both methods reduce to finding
optimal monotone functions, which we do by performing Bayesian optimization
with a Gaussian process (GP) prior. We conduct experiments on computer Go,
where the returns are given by a deep value neural network, and show that our
proposed framework outperforms previous methods. |
doi_str_mv | 10.48550/arxiv.2001.09325 |
format | Article |
creationdate | 2020-01-25 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2001.09325 |
language | eng |
recordid | cdi_arxiv_primary_2001_09325 |
source | arXiv.org |
subjects | Computer Science - Learning ; Statistics - Machine Learning |
title | Bayesian optimization for backpropagation in Monte-Carlo tree search |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T10%3A12%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Bayesian%20optimization%20for%20backpropagation%20in%20Monte-Carlo%20tree%20search&rft.au=Li,%20Yueqin&rft.date=2020-01-25&rft_id=info:doi/10.48550/arxiv.2001.09325&rft_dat=%3Carxiv_GOX%3E2001_09325%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
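To make the two ingredients of the abstract above concrete, the sketch below contrasts the standard running-average backup with a softmax-weighted backup in which a temperature `tau` stands in for the tunable monotone function. This is a minimal illustration only, not the authors' implementation: `Node`, `backup_mean`, and `backup_softmax` are hypothetical names, and the paper's Softmax/Monotone MCTS search over monotone functions is richer than a single temperature parameter.

```python
import math

class Node:
    """Minimal MCTS node (hypothetical): visit count, value estimate, children."""
    def __init__(self):
        self.visits = 0
        self.value = 0.0
        self.children = []

def backup_mean(node, ret):
    """Standard backpropagation: incremental running average of returns.
    This is the update the abstract criticizes: it assumes a stationary
    return distribution, so in min-max trees the average converges slowly
    to the true (max/min) value."""
    node.visits += 1
    node.value += (ret - node.value) / node.visits

def backup_softmax(node, tau):
    """Softmax-weighted backup (illustrative): re-estimate the node's value
    as a softmax-weighted average of its children's values. As tau -> 0
    this approaches a pure max backup; for large tau it approaches the mean."""
    if not node.children:
        return
    vals = [c.value for c in node.children]
    m = max(vals)  # subtract the max before exponentiating, for stability
    w = [math.exp((v - m) / tau) for v in vals]
    node.value = sum(wi * vi for wi, vi in zip(w, vals)) / sum(w)
```

The abstract's second ingredient, Bayesian optimization with a Gaussian-process prior, can then be sketched as an outer loop that tunes `tau` against some measure of playing strength. Here `evaluate` is a hypothetical callback (e.g., win rate over a batch of games played with the value network), scikit-learn supplies the GP surrogate, and the acquisition rule is a simple upper confidence bound, which may differ from the paper's choice.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt_tau(evaluate, candidates, n_iter=10, kappa=2.0):
    """Fit a GP to (tau, evaluate(tau)) observations and repeatedly probe
    the candidate tau maximizing the upper confidence bound mu + kappa*sigma."""
    rng = np.random.default_rng(0)
    X = [[float(rng.choice(candidates))]]  # one random initial probe
    y = [evaluate(X[0][0])]
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates.reshape(-1, 1), return_std=True)
        tau = float(candidates[np.argmax(mu + kappa * sigma)])
        X.append([tau])
        y.append(evaluate(tau))
    return X[int(np.argmax(y))][0]  # best tau observed so far
```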