Deep Reinforcement Learning-Based Mode Selection and Resource Management for Green Fog Radio Access Networks


Bibliographic Details

Published in: IEEE Internet of Things Journal, 2019-04, Vol. 6 (2), p. 1960-1971
Authors: Sun, Yaohua; Peng, Mugen; Mao, Shiwen
Format: Article
Language: English
Abstract

Fog radio access networks (F-RANs) are seen as potential architectures to support Internet of Things services by leveraging edge caching and edge computing. However, existing work on resource management in F-RANs mainly considers a static system with only one communication mode. Given network dynamics, resource diversity, and the coupling of resource management with mode selection, resource management in F-RANs becomes very challenging. Motivated by recent developments in artificial intelligence, a deep reinforcement learning (DRL)-based joint mode selection and resource management approach is proposed. Each user equipment (UE) can operate either in cloud RAN (C-RAN) mode or in device-to-device (D2D) mode, and the resources managed include both radio and computing resources. The core idea is that the network controller makes intelligent decisions on UE communication modes and processors' on-off states, with precoding for UEs in C-RAN mode optimized subsequently, aiming to minimize long-term system power consumption under the dynamics of edge cache states. Simulations demonstrate the impact of several parameters, such as the learning rate and edge caching service capability, on system performance, and the proposal is compared with several alternative schemes to show its effectiveness. Moreover, transfer learning is integrated with DRL to accelerate the learning process.
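Only the abstract is available in this record, but it outlines a standard DRL control loop: the state captures edge cache dynamics, the action jointly selects each UE's communication mode and each processor's on-off state, and the reward is the negative system power. The sketch below shows what such a controller could look like, assuming a DQN-style agent; the network architecture, power model, cache dynamics, and all hyperparameters here are illustrative guesses, not the authors' implementation.

```python
# Hypothetical sketch of a DQN-style controller matching the abstract's setup.
# State: per-UE edge cache states; action: joint per-UE mode (0 = C-RAN, 1 = D2D)
# plus per-processor on/off; reward: negative system power consumption.
# All architecture choices, the power model, and hyperparameters are assumptions.
import random
import torch
import torch.nn as nn

N_UES, N_PROCS = 4, 2
N_ACTIONS = (2 ** N_UES) * (2 ** N_PROCS)  # joint discrete action space

q_net = nn.Sequential(nn.Linear(N_UES, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(N_UES, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)  # learning rate: one of the studied parameters
gamma, eps = 0.95, 0.1
replay = []  # unbounded for brevity; a real agent would cap the buffer

def decode(action):
    """Split the joint action index into per-UE modes and processor states."""
    modes = [(action >> i) & 1 for i in range(N_UES)]
    procs = [(action >> (N_UES + i)) & 1 for i in range(N_PROCS)]
    return modes, procs

def step(cache_state, action):
    """Toy environment: placeholder power model and random cache dynamics."""
    modes, procs = decode(action)
    power = 10.0 * sum(procs) + sum(2.0 if m == 0 else 0.5 for m in modes)
    next_state = [random.randint(0, 1) for _ in range(N_UES)]  # edge cache evolves
    return next_state, -power                                   # reward = -power

state = [random.randint(0, 1) for _ in range(N_UES)]
for t in range(1, 2001):
    s = torch.tensor(state, dtype=torch.float32)
    if random.random() < eps:                       # epsilon-greedy exploration
        action = random.randrange(N_ACTIONS)
    else:
        action = int(q_net(s).argmax())
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 32:                           # one SGD step on a minibatch
        batch = random.sample(replay, 32)
        ss = torch.tensor([b[0] for b in batch], dtype=torch.float32)
        aa = torch.tensor([b[1] for b in batch])
        rr = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        ns = torch.tensor([b[3] for b in batch], dtype=torch.float32)
        q = q_net(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                       # bootstrapped Q-learning target
            target = rr + gamma * target_net(ns).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    if t % 200 == 0:                                # periodic target-network sync
        target_net.load_state_dict(q_net.state_dict())
```

The transfer-learning acceleration mentioned in the abstract would, in a sketch like this, correspond to initializing q_net from weights trained in a related network scenario rather than from scratch.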
DOI: 10.1109/JIOT.2018.2871020
ISSN: 2327-4662
Source: IEEE Electronic Library (IEL)
Subjects:
Artificial intelligence
Caching
Cloud computing
communication mode selection
Communications systems
Computer simulation
Deep learning
deep reinforcement learning (DRL)
Device-to-device communication
Edge computing
Fog
fog radio access networks (F-RANs)
Heuristic algorithms
Internet of Things
Machine learning
Microprocessors
Modal choice
Power consumption
Power demand
Program processors
Radio
Resource management
Support services