Asymptotic Convergence and Performance of Multi-Agent Q-Learning Dynamics
Achieving convergence of multiple learning agents in general $N$-player games is imperative for the development of safe and reliable machine learning (ML) algorithms and their application to autonomous systems. Yet it is known that, outside the bounds of simple two-player games, convergence cannot be taken for granted.
Saved in:
Main authors: | Hussain, Aamal Abbas; Belardinelli, Francesco; Piliouras, Georgios |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computer Science and Game Theory; Computer Science - Multiagent Systems; Mathematics - Dynamical Systems |
Online access: | Order full text |
creator | Hussain, Aamal Abbas; Belardinelli, Francesco; Piliouras, Georgios |
description | Achieving convergence of multiple learning agents in general
$N$-player games is imperative for the development of safe and reliable
machine learning (ML) algorithms and their application to autonomous systems.
Yet it is known that, outside the bounds of simple two-player games,
convergence cannot be taken for granted.
To make progress on this problem, we study the dynamics of smooth Q-Learning,
a popular reinforcement learning algorithm in which an exploration rate
quantifies the tendency of learning agents to explore their state space
rather than exploit their payoffs. We show a sufficient condition on the rate
of exploration such that the Q-Learning dynamics are guaranteed to converge
to a unique equilibrium in any game. We connect this result to games for
which Q-Learning is known to converge with arbitrary exploration rates,
including weighted potential games and weighted zero-sum polymatrix games.
Finally, we examine the performance of the Q-Learning dynamics as measured by
the Time-Averaged Social Welfare, and compare it with the Social Welfare
achieved by the equilibrium. We provide a sufficient condition under which
the Q-Learning dynamics outperform the equilibrium even when the dynamics do
not converge. |
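For orientation, the sketch below illustrates the kind of smooth (Boltzmann) Q-Learning dynamic and the Time-Averaged Social Welfare measure that the abstract refers to. It is a minimal sketch under stated assumptions, not the authors' implementation: the payoff matrices, step size `alpha`, and exploration rate `T` are hypothetical illustrative choices, and the update shown is the standard expected (mean-field) form of smooth Q-Learning rather than the paper's exact formulation.

```python
# Minimal illustrative sketch (not the paper's code): smooth "Boltzmann" Q-learning
# in a two-player, two-action matrix game, tracking the time-averaged social
# welfare along the trajectory. Payoff matrices, alpha, and T are hypothetical.
import numpy as np

A = np.array([[1.0, 5.0],
              [0.0, 3.0]])   # row player's payoff matrix (hypothetical)
B = A.T                      # column player's payoff matrix (hypothetical, symmetric game)

def boltzmann(q, T):
    """Softmax policy; the exploration rate T trades off exploring vs. exploiting."""
    z = np.exp(q / T)
    return z / z.sum()

def run(T=1.0, alpha=0.05, steps=5000):
    Qx = np.zeros(2)         # row player's Q-value estimates, one per action
    Qy = np.zeros(2)         # column player's Q-value estimates, one per action
    total_welfare = 0.0
    for _ in range(steps):
        x, y = boltzmann(Qx, T), boltzmann(Qy, T)
        # Smoothed update toward the expected payoff of each action
        # against the opponent's current mixed strategy.
        Qx += alpha * (A @ y - Qx)
        Qy += alpha * (x @ B - Qy)
        total_welfare += x @ A @ y + x @ B @ y   # instantaneous social welfare
    return boltzmann(Qx, T), boltzmann(Qy, T), total_welfare / steps

x, y, avg_sw = run(T=1.0)
print("row policy:", x, "col policy:", y, "time-averaged social welfare:", avg_sw)
```

Raising `T` pushes both policies toward uniform exploration, the regime in which, per the abstract's sufficient condition on the rate of exploration, convergence to a unique equilibrium can be guaranteed; lowering `T` recovers near-best-response behaviour, where convergence may fail and the time-averaged welfare becomes the relevant performance measure.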
doi_str_mv | 10.48550/arxiv.2301.09619 |
format | Article |
creationdate | 2023-01-23 |
rights | http://creativecommons.org/licenses/by/4.0 (open access) |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2301.09619 |
language | eng |
recordid | cdi_arxiv_primary_2301_09619 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Science and Game Theory; Computer Science - Multiagent Systems; Mathematics - Dynamical Systems |
title | Asymptotic Convergence and Performance of Multi-Agent Q-Learning Dynamics |
url | https://arxiv.org/abs/2301.09619 |