Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning
In the fifth-generation of mobile communications, network slicing is used to provide an optimal network for various services as a slice. In this paper, we propose a radio access network (RAN) slicing method that flexibly allocates RAN resources using deep reinforcement learning (DRL). In RANs, the number of slices controlled by a base station fluctuates in terms of user ingress and egress from the base station coverage area and service switching on the respective sets of user equipment. Therefore, when resource allocation depends on the number of slices, resources cannot be allocated when the number of slices changes. We consider a method that makes optimal-resource allocation independent of the number of slices. Resource allocation is optimized using DRL, which learns the best action for a state through trial and error. To achieve independence from the number of slices, we show a design for a model that manages resources on a one-slice-by-one-agent basis using Ape-X, which is a DRL method. In Ape-X, because agents can be employed in parallel, models that learn various environments can be generated through trial and error of multiple environments. In addition, we design a model that satisfies the slicing requirements without over-allocating resources. Based on this design, it is possible to optimally allocate resources independently of the number of slices by changing the number of agents. In the evaluation, we test multiple scenarios and show that the mean satisfaction of the slice requirements is approximately 97%.
Saved in:
Published in: | IEEE access, 2020-01, Vol. 8, p. 1-1 |
---|---|
Main authors: | Abiko, Yu; Saito, Takato; Ikeda, Daizo; Ohta, Ken; Mizuno, Tadanori; Mineno, Hiroshi |
Format: | Article |
Language: | eng |
Keywords: | 5G mobile communication; Deep reinforcement learning; Network slicing; RAN slicing; Resource allocation |
Online access: | Full text |
creator | Abiko, Yu; Saito, Takato; Ikeda, Daizo; Ohta, Ken; Mizuno, Tadanori; Mineno, Hiroshi |
description | In the fifth-generation of mobile communications, network slicing is used to provide an optimal network for various services as a slice. In this paper, we propose a radio access network (RAN) slicing method that flexibly allocates RAN resources using deep reinforcement learning (DRL). In RANs, the number of slices controlled by a base station fluctuates in terms of user ingress and egress from the base station coverage area and service switching on the respective sets of user equipment. Therefore, when resource allocation depends on the number of slices, resources cannot be allocated when the number of slices changes. We consider a method that makes optimal-resource allocation independent of the number of slices. Resource allocation is optimized using DRL, which learns the best action for a state through trial and error. To achieve independence from the number of slices, we show a design for a model that manages resources on a one-slice-by-one-agent basis using Ape-X, which is a DRL method. In Ape-X, because agents can be employed in parallel, models that learn various environments can be generated through trial and error of multiple environments. In addition, we design a model that satisfies the slicing requirements without over-allocating resources. Based on this design, it is possible to optimally allocate resources independently of the number of slices by changing the number of agents. In the evaluation, we test multiple scenarios and show that the mean satisfaction of the slice requirements is approximately 97%. |
doi_str_mv | 10.1109/ACCESS.2020.2986050 |
format | Article |
identifier | ISSN: 2169-3536 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | 5G mobile communication; Deep learning; Deep reinforcement learning; Egress; Environment models; Machine learning; Mobile communication systems; Network slicing; Optimization; Radio access networks; RAN slicing; Resource allocation; Resource management; Scalability; Wireless networks |
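The allocation scheme the abstract describes, one DRL agent per slice with the agent count simply tracking the slice count, can be sketched in outline. The names below (`TOTAL_RBS`, `SliceAgent`, `allocate`) and the greedy stand-in policy are illustrative assumptions, not the authors' Ape-X implementation:

```python
# Toy sketch of the one-agent-per-slice resource block (RB) allocation idea.
# All names and the greedy policy are assumptions for illustration only.

TOTAL_RBS = 50  # RBs the base station can grant per step (assumed figure)

class SliceAgent:
    """One agent manages resource blocks for exactly one slice."""

    def __init__(self, demand):
        self.demand = demand  # RBs this slice needs to meet its requirement

    def act(self, remaining):
        # Stand-in for a learned policy: request just enough to meet the
        # demand and never more, mirroring the paper's goal of satisfying
        # slice requirements without over-allocating resources.
        return min(self.demand, remaining)

def allocate(demands):
    """Grant RBs slice by slice; spawning one agent per slice is what makes
    the scheme independent of how many slices currently exist."""
    agents = [SliceAgent(d) for d in demands]
    remaining, grants = TOTAL_RBS, []
    for agent in agents:
        rb = agent.act(remaining)
        remaining -= rb
        grants.append(rb)
    satisfied = sum(g >= a.demand for g, a in zip(grants, agents))
    return grants, satisfied / len(agents)  # mean requirement satisfaction

grants, rate = allocate([10, 15, 5])  # three slices -> three agents
```

In the paper each per-slice policy is trained with Ape-X (parallel actors feeding prioritized experience replay); the greedy `act` above only marks where that learned policy would sit.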