Multi-Agent Reinforcement Learning Based Uplink OFDMA for IEEE 802.11ax Networks

In IEEE 802.11ax Wireless Local Area Networks (WLANs), Orthogonal Frequency Division Multiple Access (OFDMA) has been introduced to enable high-throughput operation. However, as the number of devices grows, it becomes difficult for the Access Point (AP) to schedule uplink transmissions, which calls for an efficient access mechanism in the OFDMA uplink system. Based on Multi-Agent Proximal Policy Optimization (MAPPO), we propose a Mean-Field Multi-Agent Proximal Policy Optimization (MFMAPPO) algorithm to improve throughput and guarantee fairness. Motivated by Mean-Field Games (MFGs) theory, a novel global state and action design is proposed to ensure the convergence of MFMAPPO in the massive access scenario. A Multi-Critic Single-Policy (MCSP) architecture is deployed in the proposed MFMAPPO so that each agent can learn the optimal channel access strategy, improving throughput while satisfying the fairness requirement. Extensive simulation experiments show that the MFMAPPO algorithm 1) has low computational complexity that increases linearly with the number of stations, 2) achieves nearly optimal throughput and fairness performance in the massive access scenario, and 3) can adapt to diverse and dynamic traffic conditions without retraining, including traffic conditions different from those seen during training.
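
The record above only summarizes the method, so the following is a minimal, non-authoritative sketch of the Multi-Critic Single-Policy (MCSP) idea described in the abstract: one shared policy head trained with a clipped PPO objective, plus separate value heads for the throughput and fairness objectives, with a mean-field summary of the other agents appended to each agent's observation. All class names, layer sizes, and the way the two advantage estimates would be combined are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): a single shared policy with two critics,
# one per objective (throughput, fairness), as suggested by the MCSP description.
# Network sizes, the mean-field summary, and the loss scalarization are assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical


class MCSPAgent(nn.Module):
    def __init__(self, obs_dim: int, mean_field_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Local observation concatenated with a fixed-size mean-field summary of other agents.
        in_dim = obs_dim + mean_field_dim
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)   # single shared policy
        self.throughput_critic = nn.Linear(hidden, 1)     # value head for the throughput reward
        self.fairness_critic = nn.Linear(hidden, 1)       # value head for the fairness reward

    def forward(self, obs: torch.Tensor, mean_field: torch.Tensor):
        h = self.backbone(torch.cat([obs, mean_field], dim=-1))
        dist = Categorical(logits=self.policy_head(h))
        return dist, self.throughput_critic(h), self.fairness_critic(h)


def ppo_policy_loss(dist, actions, old_log_probs, advantages, clip_eps: float = 0.2):
    """Standard clipped PPO surrogate. Here `advantages` would combine the two
    critics' advantage estimates (e.g. a weighted sum), which is an assumption."""
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(ratio * advantages, clipped).mean()
```

As a usage note, the mean-field input could be, for instance, the average of the other stations' most recent actions or buffer states; because that summary has a fixed dimension regardless of how many stations are present, the per-agent computation would not grow with the network size, which would be consistent with the linear-complexity claim in the abstract.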

Bibliographic Details
Published in: IEEE Transactions on Wireless Communications, 2024-08, Vol. 23 (8), pp. 8868-8882
Main Authors: Han, Mingqi; Sun, Xinghua; Zhan, Wen; Gao, Yayu; Jiang, Yuan
Format: Article
Language: English
DOI: 10.1109/TWC.2024.3355276
ISSN: 1536-1276
EISSN: 1558-2248
Subjects: Algorithms; Computational complexity; Frequency division multiple access; Heuristic algorithms; IEEE 802.11ax Standard; Local area networks; mean-field reinforcement learning; multi-agent reinforcement learning; multi-objective reinforcement learning; Multiagent systems; Multiple access; Optimization; Orthogonal Frequency Division Multiplexing; Throughput; Traffic; Uplink; Uplinking; Wireless networks