Satisficing in Multi-Armed Bandit Problems

Satisficing is a relaxation of maximizing and allows for less risky decision making in the face of uncertainty. We propose two sets of satisficing objectives for the multi-armed bandit problem, where the objective is to achieve reward-based decision-making performance above a given threshold. We show that these new problems are equivalent to various standard multi-armed bandit problems with maximizing objectives and use the equivalence to find bounds on performance. The different objectives can result in qualitatively different behavior; for example, agents explore their options continually in one case and only a finite number of times in another. For the case of Gaussian rewards we show an additional equivalence between the two sets of satisficing objectives that allows algorithms developed for one set to be applied to the other. We then develop variants of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and show that these modified UCL algorithms achieve efficient satisficing performance.
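The threshold-based objective described in the abstract can be illustrated with a minimal sketch. This is a toy illustration, not the authors' algorithm: it uses a frequentist UCB1-style index as a stand-in for the paper's Bayesian upper credible limit, and the names (`satisficing_bandit`, `satisfaction_level`) are assumed for the example.

```python
import math
import random

def satisficing_bandit(true_means, satisfaction_level, horizon, seed=0):
    """Toy satisficing bandit with unit-variance Gaussian rewards.

    A UCB1-style index stands in for the paper's Bayesian upper credible
    limit (UCL). Any arm whose index clears `satisfaction_level` counts as
    "good enough"; among those the agent exploits the best empirical mean,
    otherwise it explores via the maximal index. Returns the average
    realized reward over the horizon.
    """
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    sums = [0.0] * n
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # initialization: pull each arm once
        else:
            index = [sums[i] / counts[i]
                     + math.sqrt(2.0 * math.log(t) / counts[i])
                     for i in range(n)]
            good_enough = [i for i in range(n)
                           if index[i] >= satisfaction_level]
            if good_enough:
                # satisfice: stop exploring, exploit a satisfactory arm
                arm = max(good_enough, key=lambda i: sums[i] / counts[i])
            else:
                # no arm looks satisfactory yet: explore as usual
                arm = max(range(n), key=lambda i: index[i])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total / horizon
```

With a threshold below the best arm's mean, the agent typically locks onto a satisfactory arm after finitely many exploratory pulls; a threshold above all means roughly recovers maximizing-style continual exploration, mirroring the qualitative distinction the abstract describes.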

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2017-08, Vol. 62 (8), pp. 3788-3803
Main authors: Reverdy, Paul; Srivastava, Vaibhav; Leonard, Naomi Ehrich
Format: Article
Language: English
DOI: 10.1109/TAC.2016.2644380
ISSN: 0018-9286
EISSN: 1558-2523
Source: IEEE Electronic Library (IEL)
Subjects: Algorithm design and analysis
Context
Decision making
Face
Linear programming
Multi-armed bandit
Robustness
upper credible limit (UCL)