Collaborative Multi-BS Power Management for Dense Radio Access Network using Deep Reinforcement Learning

Network energy efficiency is a main pillar in the design and operation of wireless communication systems. In this paper, we investigate a dense radio access network (dense-RAN) capable of radiated power management at the base station (BS). Aiming to improve the long-term network energy efficiency, an optimization problem is formulated by collaboratively managing the radiated power levels of multiple BSs, with constraints on the users' traffic volume and achievable rate. Considering stochastic traffic arrivals at the users and time-varying network interference, we first formulate the problem as a Markov decision process (MDP) and then develop a novel deep reinforcement learning (DRL) framework based on the cloud-RAN operation scheme. To tackle the trade-off between complexity and performance, the overall optimization of multi-BS energy efficiency under the multiplicative complexity constraint is modeled to achieve near-optimal performance using a deep Q-network (DQN). In the DQN, each BS first maximizes its individual energy efficiency and then cooperates with the other BSs to maximize the overall multi-BS energy efficiency. Simulation results demonstrate that the proposed algorithm converges faster and improves network energy efficiency by 5% and 10% compared with the Q-learning and sleep-scheme benchmarks, respectively.

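The DQN component described in the abstract lends itself to a short illustration. The sketch below, in PyTorch, shows a minimal deep Q-network agent choosing among discrete per-BS radiated power levels with an epsilon-greedy policy and a one-step temporal-difference update. It is only a sketch under assumed state and action dimensions with a placeholder reward, not the authors' implementation; a common energy-efficiency reward is the achieved rate divided by the consumed power (bits per joule).

```python
# Illustrative sketch only: a minimal DQN agent for discrete BS power-level
# selection, in the spirit of the abstract. All names, dimensions, and the
# reward below are assumptions for illustration, not the paper's code.
import random
import torch
import torch.nn as nn

N_POWER_LEVELS = 4   # assumed number of discrete radiated-power levels per BS
STATE_DIM = 8        # assumed BS-local state: e.g. queue, rate, interference

class QNetwork(nn.Module):
    """Maps a BS-local state to Q-values, one per candidate power level."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_power_level(q_net: QNetwork, state: torch.Tensor, eps: float) -> int:
    """Epsilon-greedy choice over the discrete power levels."""
    if random.random() < eps:
        return random.randrange(N_POWER_LEVELS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_update(q_net, target_net, optimizer, s, a, r, s_next, gamma=0.99):
    """One-step temporal-difference update on a single transition."""
    q_sa = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Toy usage: one agent per BS. The reward here is a placeholder; an
# energy-efficiency reward would be the achieved rate over consumed power.
q_net = QNetwork(STATE_DIM, N_POWER_LEVELS)
target_net = QNetwork(STATE_DIM, N_POWER_LEVELS)
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

s = torch.randn(STATE_DIM)
a = select_power_level(q_net, s, eps=0.1)
r = torch.tensor(1.0)          # placeholder reward
s_next = torch.randn(STATE_DIM)
td_update(q_net, target_net, optimizer, s, a, r, s_next)
```

In the cooperative scheme the abstract outlines, each BS would run such an agent on its local state, and the reward would be extended with a term reflecting the overall multi-BS energy efficiency, so that individually greedy power choices are steered toward the network-wide objective.
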
Bibliographic Details
Authors: Chang, Yuchao; Chen, Wen; Li, Jun; Liu, Jianpo; Wei, Haoran; Wang, Zhendong; Al-Dhahir, Naofal
Format: Article
Language: English
Online access: Order full text
DOI: 10.48550/arxiv.2304.07976
Date: 2023-04-16
Source: arXiv.org