Distributed Learning in Multi-Armed Bandit With Multiple Players

We formulate and study a decentralized multi-armed bandit (MAB) problem. There are M distributed players competing for N independent arms. Each arm, when played, offers i.i.d. reward according to a distribution with an unknown parameter. At each time, each player chooses one arm to play without exchanging observations or any information with other players. Players choosing the same arm collide, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. We show that the minimum system regret of the decentralized MAB grows with time at the same logarithmic order as in the centralized counterpart, where players act collectively as a single entity by exchanging observations and making decisions jointly. A decentralized policy is constructed to achieve this optimal order while ensuring fairness among players and without assuming any pre-agreement or information exchange among players. Based on a time-division fair sharing (TDFS) of the M best arms, the proposed policy is constructed and its order optimality is proven under a general reward model. Furthermore, the basic structure of the TDFS policy can be used with any order-optimal single-player policy to achieve order optimality in the decentralized setting. We also establish a lower bound on the system regret for a general class of decentralized policies, to which the proposed policy belongs. This problem finds potential applications in cognitive radio networks, multi-channel communication systems, multi-agent systems, web search and advertising, and social networks.

Bibliographic details
Published in: IEEE Transactions on Signal Processing, 2010-11, Vol. 58 (11), pp. 5667-5681
Authors: Liu, Keqin; Zhao, Qing
Format: Article
Language: English
DOI: 10.1109/TSP.2010.2062509
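To make the setting described in the abstract concrete, the following is a minimal toy simulation, not the authors' exact TDFS construction: M players each run their own UCB1 statistics over N Bernoulli arms and, in a time-division round-robin, target different ranks of their own estimated M best arms. Under the "no one receives reward" collision model, colliding players record a zero observation. The function name, the staggered warm-up phase, and all parameters are illustrative assumptions.

```python
import math
import random

def simulate_tdfs(means, M, horizon, warmup_passes=20, seed=0):
    """Toy decentralized-MAB sketch in the spirit of time-division fair
    sharing (TDFS): M players rotate with staggered offsets through the
    arms each believes are the M best, using only their own observations.
    Returns the total system reward accumulated over `horizon` rounds."""
    rng = random.Random(seed)
    N = len(means)
    counts = [[0] * N for _ in range(M)]    # per-player play counts
    values = [[0.0] * N for _ in range(M)]  # per-player empirical means
    total_reward = 0.0

    def update(player, arm, reward):
        counts[player][arm] += 1
        values[player][arm] += (reward - values[player][arm]) / counts[player][arm]

    for t in range(horizon):
        if t < warmup_passes * N:
            # Staggered exploration (assumes M <= N): distinct offsets
            # guarantee no collisions while every arm gets sampled.
            choices = [(t + j) % N for j in range(M)]
        else:
            choices = []
            for j in range(M):
                # UCB1 index per arm, from this player's own data only.
                ucb = [values[j][a]
                       + math.sqrt(2 * math.log(t + 1) / counts[j][a])
                       for a in range(N)]
                ranked = sorted(range(N), key=lambda a: -ucb[a])
                # Time division: player j targets rank (j + t) mod M of
                # its own ranked list, so players who agree on the M best
                # arms never collide.
                choices.append(ranked[(j + t) % M])
        for j in range(M):
            arm = choices[j]
            if choices.count(arm) > 1:
                update(j, arm, 0.0)  # collision: no one receives reward
            else:
                r = 1.0 if rng.random() < means[arm] else 0.0
                update(j, arm, r)
                total_reward += r
    return total_reward
```

Once the players' rankings agree, the rotating offsets share the M best arms evenly, which is the fairness property the abstract attributes to the TDFS structure; the real policy achieves this with logarithmic-order regret and without the pre-assigned player indices this sketch relies on.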
ISSN: 1053-587X
EISSN: 1941-0476
Source: IEEE Electronic Library (IEL)
Keywords:
Advertising
Applied sciences
Arm
Cognitive radio
Construction
Decentralized
decentralized multi-armed bandit
distributed learning
Exact sciences and technology
Exchanging
Information, signal and communications theory
Laboratories
Miscellaneous
multi-agent systems
Networks
Optimization
Permission
Players
Policies
Radio access networks
Radio networks
Searching
Signal processing
Social network services
Studies
system regret
TDFS
Telecommunications and information theory
USA Councils
Web search
Web search and advertising