Learning in Congestion Games with Bandit Feedback
In this paper, we investigate Nash-regret minimization in congestion games, a class of games with benign theoretical structure and broad real-world applications. We first propose a centralized algorithm based on the optimism in the face of uncertainty principle for congestion games with (semi-)bandit feedback, and obtain finite-sample guarantees. Then we propose a decentralized algorithm via a novel combination of the Frank-Wolfe method and G-optimal design. By exploiting the structure of the congestion game, we show the sample complexity of both algorithms depends only polynomially on the number of players and the number of facilities, but not the size of the action set, which can be exponentially large in terms of the number of facilities. We further define a new problem class, Markov congestion games, which allows us to model the non-stationarity in congestion games. We propose a centralized algorithm for Markov congestion games, whose sample complexity again has only polynomial dependence on all relevant problem parameters, but not the size of the action set.
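To make the (semi-)bandit structure concrete, here is a minimal, hypothetical sketch rather than the paper's actual algorithm: each player picks a subset of facilities, observes the realized cost of every facility it used, and maintains an optimistic per-facility estimate. The cost model, variable names, and confidence bonus below are illustrative assumptions; the point is only that the statistics scale with the number of facilities, even though the number of joint actions is exponential in it.

```python
import itertools
import numpy as np

# Toy congestion game: F facilities; an action is a non-empty subset of
# facilities (e.g. a path), so the action set has size 2**F - 1, which is
# exponential in F even though F itself stays small.
F = 4
actions = [s for r in range(1, F + 1)
           for s in itertools.combinations(range(F), r)]
print(len(actions))  # 15 joint actions per player for F = 4

rng = np.random.default_rng(0)

def facility_cost(f, load):
    """Hypothetical noisy cost of facility f when `load` players use it."""
    return 0.1 * (f + 1) * load + rng.normal(scale=0.05)

# Semi-bandit feedback: a player observes the cost of every facility it used.
# Optimistic (UCB-style) estimates are kept per (facility, load) pair, so the
# table has O(F * n_players) entries, independent of the exponential action set.
counts, means = {}, {}

def update(f, load, observed_cost):
    key = (f, load)
    n = counts.get(key, 0) + 1
    counts[key] = n
    mu = means.get(key, 0.0)
    means[key] = mu + (observed_cost - mu) / n  # incremental mean

def optimistic_cost(f, load, t):
    """Lower confidence bound on the mean cost (optimism = assume costs are low)."""
    key = (f, load)
    n = counts.get(key, 0)
    if n == 0:
        return 0.0  # unexplored pairs are maximally optimistic
    return means[key] - np.sqrt(2.0 * np.log(t + 1) / n)

# Example round: a player takes action (0, 2) while 3 players in total use
# facility 0 and 1 player uses facility 2.
for f, load in [(0, 3), (2, 1)]:
    update(f, load, facility_cost(f, load))
```

The paper's algorithms are more involved; this sketch only illustrates why per-facility statistics can avoid any dependence on the exponentially large action set.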
Saved in:
Main authors: Cui, Qiwen; Xiong, Zhihan; Fazel, Maryam; Du, Simon S
Format: Article
Language: English
Subjects: Computer Science - Computer Science and Game Theory; Computer Science - Learning; Computer Science - Multiagent Systems; Statistics - Machine Learning
Online Access: Order full text
container_end_page | |
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Cui, Qiwen; Xiong, Zhihan; Fazel, Maryam; Du, Simon S |
description | In this paper, we investigate Nash-regret minimization in congestion games, a class of games with benign theoretical structure and broad real-world applications. We first propose a centralized algorithm based on the optimism in the face of uncertainty principle for congestion games with (semi-)bandit feedback, and obtain finite-sample guarantees. Then we propose a decentralized algorithm via a novel combination of the Frank-Wolfe method and G-optimal design. By exploiting the structure of the congestion game, we show the sample complexity of both algorithms depends only polynomially on the number of players and the number of facilities, but not the size of the action set, which can be exponentially large in terms of the number of facilities. We further define a new problem class, Markov congestion games, which allows us to model the non-stationarity in congestion games. We propose a centralized algorithm for Markov congestion games, whose sample complexity again has only polynomial dependence on all relevant problem parameters, but not the size of the action set. |
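The decentralized algorithm is described as combining the Frank-Wolfe method with G-optimal design. As a rough, self-contained illustration of the design ingredient alone (not the paper's procedure), the classical Frank-Wolfe iteration for a D-optimal design, which by the Kiefer-Wolfowitz equivalence is also G-optimal, can be sketched as follows; the feature matrix, iteration count, and step size are illustrative assumptions.

```python
import numpy as np

def g_optimal_design(X, n_iter=500):
    """X: (K, d) array of feature vectors; returns design weights lambda over rows."""
    K, d = X.shape
    lam = np.full(K, 1.0 / K)                        # start from the uniform design
    for t in range(n_iter):
        A = X.T @ (lam[:, None] * X)                 # information matrix A(lambda)
        A_inv = np.linalg.pinv(A)
        g = np.einsum('ij,jk,ik->i', X, A_inv, X)    # x_i^T A^{-1} x_i for every action i
        i_star = int(np.argmax(g))                   # Frank-Wolfe vertex: largest gradient entry
        gamma = 1.0 / (t + 2)                        # standard diminishing step size
        lam = (1 - gamma) * lam
        lam[i_star] += gamma
    return lam

# Toy usage: random features in R^3; the worst-case variance max_i x_i^T A^{-1} x_i
# should approach d = 3 as the design improves (Kiefer-Wolfowitz equivalence).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
lam = g_optimal_design(X)
A_inv = np.linalg.pinv(X.T @ (lam[:, None] * X))
print(np.max(np.einsum('ij,jk,ik->i', X, A_inv, X)))
```

G-optimal design is a natural exploration tool in bandit settings because it controls the worst-case prediction variance over all actions; how the paper decentralizes this and interleaves it with Frank-Wolfe updates is beyond this sketch.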
doi_str_mv | 10.48550/arxiv.2206.01880 |
format | Article |
creationdate | 2022-06-03 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
backlink | https://arxiv.org/abs/2206.01880 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2206.01880 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2206_01880 |
source | arXiv.org |
subjects | Computer Science - Computer Science and Game Theory; Computer Science - Learning; Computer Science - Multiagent Systems; Statistics - Machine Learning |
title | Learning in Congestion Games with Bandit Feedback |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T05%3A21%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20in%20Congestion%20Games%20with%20Bandit%20Feedback&rft.au=Cui,%20Qiwen&rft.date=2022-06-03&rft_id=info:doi/10.48550/arxiv.2206.01880&rft_dat=%3Carxiv_GOX%3E2206_01880%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |