Towards Soft Fairness in Restless Multi-Armed Bandits

Restless multi-armed bandits (RMAB) is a framework for allocating limited resources under uncertainty. It is an extremely useful model for monitoring beneficiaries and executing timely interventions to ensure maximum benefit in public health settings (e.g., ensuring patients take medicines in tuberculosis settings, ensuring pregnant mothers listen to automated calls about good pregnancy practices). Due to the limited resources, typically certain communities or regions are starved of interventions that can have follow-on effects. To avoid starvation in the executed interventions across individuals/regions/communities, we first provide a soft fairness constraint and then provide an approach to enforce the soft fairness constraint in RMABs. The soft fairness constraint requires that an algorithm never probabilistically favor one arm over another if the long-term cumulative reward of choosing the latter arm is higher. Our approach incorporates a softmax-based value iteration method in the RMAB setting to design selection algorithms that manage to satisfy the proposed fairness constraint. Our method, referred to as SoftFair, also provides theoretical performance guarantees and is asymptotically optimal. Finally, we demonstrate the utility of our approaches on simulated benchmarks and show that the soft fairness constraint can be handled without a significant sacrifice on value.
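
The abstract sketches the key mechanism: arms are selected with probabilities driven by a softmax over estimates of long-term value, so an arm with a higher estimated value is never selected with lower probability than an arm with a lower one. The snippet below is a minimal illustration of that selection rule under stated assumptions, not the paper's SoftFair algorithm; the value estimates `q_values`, the temperature `tau`, and the budget `k` are hypothetical placeholders.

```python
# Minimal sketch of softmax-based arm selection (illustrative only, not the
# paper's SoftFair procedure): sample a budget of k distinct arms with
# probabilities given by a softmax over hypothetical long-term value
# estimates. The softmax is monotone in the values, so a higher-valued arm
# is never assigned a smaller selection probability, and no arm is starved.
import numpy as np

def soft_fair_selection(q_values, k, tau=1.0, rng=None):
    """Sample k distinct arm indices with softmax probabilities over q_values."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(q_values, dtype=float)
    logits = (q - q.max()) / tau                  # shift for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(q), size=k, replace=False, p=probs)

# Example: hypothetical value estimates for four arms, a budget of two pulls.
print(soft_fair_selection([0.2, 1.5, 0.9, 0.4], k=2, tau=0.5))
```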

Bibliographic Details
Main authors: Li, Dexun; Varakantham, Pradeep
Format: Article
Language: English
Subjects:
Online access: Order full text
creator Li, Dexun; Varakantham, Pradeep
doi_str_mv 10.48550/arxiv.2207.13343
format Article
creationdate 2022-07-27
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
link https://arxiv.org/abs/2207.13343
identifier DOI: 10.48550/arxiv.2207.13343
language eng
recordid cdi_arxiv_primary_2207_13343
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computers and Society
Computer Science - Learning
title Towards Soft Fairness in Restless Multi-Armed Bandits