Thompson Sampling with Information Relaxation Penalties
We consider a finite-horizon multiarmed bandit (MAB) problem in a Bayesian setting, for which we propose an information relaxation sampling framework. With this framework, we define an intuitive family of control policies that include Thompson sampling (TS) and the Bayesian optimal policy as endpoints. Analogous to TS, which at each decision epoch pulls an arm that is best with respect to the randomly sampled parameters, our algorithms sample entire future reward realizations and take the corresponding best action. However, this is done in the presence of “penalties” that seek to compensate for the availability of future information. We develop several novel policies and performance bounds for MAB problems that vary in terms of improving performance and increasing computational complexity between the two endpoints. Our policies can be viewed as natural generalizations of TS that simultaneously incorporate knowledge of the time horizon and explicitly consider the exploration-exploitation trade-off. We prove associated structural results on performance bounds and suboptimality gaps. Numerical experiments suggest that this new class of policies performs well, particularly in settings where the finite time horizon introduces significant exploration-exploitation tension into the problem. Finally, inspired by the finite-horizon Gittins index, we propose an index policy that builds on our framework and outperforms state-of-the-art algorithms in our numerical experiments.

This paper was accepted by Hamid Nazerzadeh, data science.

Funding: This research was supported by the National Research Foundation of Korea [NRF-2022R1C1C1013402].

Supplemental Material: The electronic companion and data files are available at https://doi.org/10.1287/mnsc.2020.01396.
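The abstract describes two sampling-based decision rules. As a concrete reference point, the sketch below shows textbook Thompson sampling for a Bernoulli bandit with independent Beta(1, 1) priors; this is standard TS, not code from the paper, and the arm means and horizon are illustrative.

```python
import numpy as np

def thompson_sampling(true_means, horizon, seed=0):
    """Textbook Thompson sampling on a Bernoulli bandit with Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha = np.ones(k)  # 1 + observed successes per arm
    beta = np.ones(k)   # 1 + observed failures per arm
    total = 0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)   # sample a mean for every arm
        arm = int(np.argmax(theta))     # pull the arm best under the sample
        reward = int(rng.random() < true_means[arm])
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total += reward
    return total

print(thompson_sampling([0.3, 0.5, 0.7], horizon=500))
```

The policies proposed in the paper go one step further: instead of sampling only the unknown parameters, they sample an entire future reward realization and act against it, with a penalty charged to compensate for the clairvoyant use of future information. The sketch below is only a schematic of that idea under simplifying assumptions of our own: the inner clairvoyant problem is reduced to committing to a single arm for the remaining horizon, and `penalty_fn` is a hypothetical placeholder rather than any of the paper's actual penalty constructions.

```python
import numpy as np

def irs_step(alpha, beta, remaining, penalty_fn, rng):
    """One decision epoch of a schematic information-relaxation-sampling rule.

    alpha, beta : Beta posterior parameters per arm.
    remaining   : pulls left in the finite horizon.
    penalty_fn  : placeholder penalty charged for peeking at the future;
                  the paper derives principled choices, not shown here.
    """
    k = len(alpha)
    theta = rng.beta(alpha, beta)  # sample parameters from the posterior
    # Sample an entire future reward realization for every arm.
    future = rng.random((k, remaining)) < theta[:, None]
    # Simplified inner problem: commit to one arm; its value is the
    # sampled total reward minus the information penalty.
    values = future.sum(axis=1) - np.array(
        [penalty_fn(a, future[a]) for a in range(k)]
    )
    return int(np.argmax(values))

# Example call: zero penalty (hypothetical), 3 arms, 100 pulls remaining.
rng = np.random.default_rng(1)
print(irs_step(np.ones(3), np.ones(3), 100, lambda a, r: 0.0, rng))
```

With a zero penalty this behaves like a noisy variant of TS; per the abstract, it is the choice of penalty that moves the policy along the spectrum from TS toward the Bayesian optimal policy.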
Published in: Management Science, 2024-05
Authors: Min, Seungki; Maglaras, Costis; Moallemi, Ciamac C.
Format: Article
Language: English
Subjects: dynamic programming: Bayesian; dynamic programming: Markov; dynamic programming: optimal control
Online access: Full text
DOI: 10.1287/mnsc.2020.01396
Publisher: INFORMS
Publication date: 2024-05-22
Pages: 23
Peer reviewed: Yes
ORCID iDs: 0000-0002-2887-1030; 0000-0002-4489-9260
ISSN: 0025-1909
EISSN: 1526-5501
Source: INFORMS journals (NSTL purchase)