Reinforcement Learning for Energy-Efficient User Association in UAV-Assisted Cellular Networks

In unmanned aerial vehicle (UAV)-assisted communications, two significant challenges need to be addressed: optimized UAV placement and energy-efficient user association. Both are crucial to meeting the quality-of-service requirements of users. To overcome these challenges, a reinforcement-learning-based solution is proposed, along with a reward function that associates users with UAVs in an intelligent manner. This solution aims to improve the system's sum rate while consuming less energy. Simulation results demonstrate the effectiveness of the proposed approach and indicate that it is more energy efficient than the benchmark scheme while also improving the system's sum rate.
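The abstract does not specify the learning algorithm or the exact reward used by the authors, so the following is only a minimal sketch of how an energy-aware reward could drive a tabular Q-learning user-association policy. The path-loss model, transmit power, energy cost, and reward weighting below are illustrative assumptions, not the formulation from the paper.

    # Minimal Q-learning sketch for energy-aware user-UAV association.
    # Illustration only: the state/action encoding, rate and energy models,
    # and reward weighting are assumptions, not the authors' exact scheme.
    import numpy as np

    rng = np.random.default_rng(0)

    NUM_USERS, NUM_UAVS = 8, 3
    AREA = 1000.0                      # side length of square service area (m)
    UAV_ALT = 100.0                    # assumed fixed UAV altitude (m)
    TX_POWER_W = 1.0                   # assumed per-link transmit power (W)
    BANDWIDTH_HZ = 1e6
    NOISE_W = 1e-9

    users = rng.uniform(0, AREA, size=(NUM_USERS, 2))
    uavs = rng.uniform(0, AREA, size=(NUM_UAVS, 2))

    def rate_bps(user_xy, uav_xy):
        """Shannon rate over a simple free-space path-loss link (toy model)."""
        d2 = np.sum((user_xy - uav_xy) ** 2) + UAV_ALT ** 2
        gain = 1e-4 / d2                      # toy path-loss constant
        snr = TX_POWER_W * gain / NOISE_W
        return BANDWIDTH_HZ * np.log2(1.0 + snr)

    def reward(user, uav):
        """Energy-aware reward: scaled rate minus an assumed link energy cost."""
        energy_j = TX_POWER_W * 1.0           # 1 s of transmission, toy model
        return rate_bps(users[user], uavs[uav]) / 1e6 - 0.5 * energy_j

    # Tabular Q-learning over (user, UAV) pairs: the state is the user being
    # served, the action is the UAV it is associated with.
    Q = np.zeros((NUM_USERS, NUM_UAVS))
    alpha, gamma, eps = 0.1, 0.9, 0.2

    for episode in range(2000):
        for u in range(NUM_USERS):
            # Epsilon-greedy action selection.
            a = rng.integers(NUM_UAVS) if rng.random() < eps else int(np.argmax(Q[u]))
            r = reward(u, a)
            next_u = (u + 1) % NUM_USERS      # next user to be associated
            Q[u, a] += alpha * (r + gamma * np.max(Q[next_u]) - Q[u, a])

    association = np.argmax(Q, axis=1)        # learned user -> UAV mapping
    print("learned association:", association)

In this toy setup the greedy policy extracted from Q tends to attach each user to a nearby UAV, since closer links yield higher rate for the same assumed energy cost; the paper's actual reward additionally targets the system sum rate and energy efficiency jointly.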

Bibliographic details
Published in: IEEE Transactions on Aerospace and Electronic Systems, 2024-04, Vol. 60 (2), pp. 2474-2481
Authors: Kaleem, Zeeshan; Khalid, Waqas; Ahmad, Ayaz; Yu, Heejung; Almasoud, Abdullah M.; Yuen, Chau
Format: Article
Language: English
Subjects: Autonomous aerial vehicles; Base stations; Cellular communication; Cellular networks; Energy efficiency; Optimization; Quality of service; Reinforcement learning (RL); Resource management; Simulation; Unmanned aerial vehicles (UAVs); User association (UA); User requirements
DOI: 10.1109/TAES.2024.3353724
ISSN: 0018-9251
EISSN: 1557-9603
Online access: Order full text