A Reinforcement Learning-Based User-Assisted Caching Strategy for Dynamic Content Library in Small Cell Networks
This paper studies the problem of joint edge cache placement and content delivery in cache-enabled small cell networks in the presence of spatio-temporal content dynamics unknown a priori. The small base stations (SBSs) satisfy users' content requests either directly from their local caches, or by retrieving them from other SBSs' caches or from the content server. In contrast to previous approaches that assume a static content library at the server, this paper considers a more realistic non-stationary content library, where new contents may emerge over time at different locations. To keep track of the spatio-temporal content dynamics, we propose that the new contents cached at users be exploited by the SBSs to update their flexible cache memories in a timely manner, in addition to their routine off-peak main cache updates from the content server. To account for the variations in traffic demands as well as the limited caching space at the SBSs, a user-assisted caching strategy based on reinforcement learning principles is proposed to progressively optimize the caching policy with the goal of maximizing the weighted network utility in the long run. Simulation results verify the superior performance of the proposed caching strategy against various benchmark designs.
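The record's subject terms mention a non-stationary bandit formulation with time-varying popularity. As a rough illustration only (not the paper's actual algorithm; all class and parameter names below are hypothetical), a sliding-window epsilon-greedy bandit captures the core idea: an SBS ranks contents by recent request counts so old popularity ages out, while occasional exploration gives newly emerging contents a chance to enter the limited cache.

```python
# Illustrative sketch only -- NOT the paper's algorithm. A sliding-window
# epsilon-greedy bandit: recent requests (a proxy for time-varying popularity)
# drive which contents an SBS keeps in its limited cache, with occasional
# exploration so newly emerging contents get a chance to be cached.
import random
from collections import deque, defaultdict

class SlidingWindowCacheBandit:
    def __init__(self, cache_size, window=100, epsilon=0.1, seed=0):
        self.cache_size = cache_size          # limited caching space at the SBS
        self.recent = deque(maxlen=window)    # sliding window: old requests age out
        self.epsilon = epsilon                # exploration rate
        self.rng = random.Random(seed)

    def observe(self, content_id):
        """Record one observed user request."""
        self.recent.append(content_id)

    def cache_decision(self, catalogue):
        """Choose cache_size contents: mostly the recently most-requested,
        with probability epsilon exploring some other content instead."""
        counts = defaultdict(int)
        for c in self.recent:
            counts[c] += 1
        ranked = sorted(catalogue, key=lambda c: counts[c], reverse=True)
        chosen = []
        for candidate in ranked:
            if len(chosen) == self.cache_size:
                break
            if self.rng.random() < self.epsilon:
                # explore: cache a random not-yet-chosen content instead
                pool = [c for c in catalogue if c not in chosen]
                chosen.append(self.rng.choice(pool))
            elif candidate not in chosen:
                chosen.append(candidate)
        # top up if exploration collided with later exploit picks
        for c in ranked:
            if len(chosen) == self.cache_size:
                break
            if c not in chosen:
                chosen.append(c)
        return chosen
```

With `epsilon=0` this degenerates to caching the window's most-requested contents; the window length trades off reactivity to new contents against noise in the popularity estimate.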
Saved in:
Published in: | IEEE transactions on communications 2020-06, Vol.68 (6), p.3627-3639 |
---|---|
Main authors: | Zhang, Xinruo ; Zheng, Gan ; Lambotharan, Sangarapillai ; Nakhai, Mohammad Reza ; Wong, Kai-Kit |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 3639 |
---|---|
container_issue | 6 |
container_start_page | 3627 |
container_title | IEEE transactions on communications |
container_volume | 68 |
creator | Zhang, Xinruo ; Zheng, Gan ; Lambotharan, Sangarapillai ; Nakhai, Mohammad Reza ; Wong, Kai-Kit |
description | This paper studies the problem of joint edge cache placement and content delivery in cache-enabled small cell networks in the presence of spatio-temporal content dynamics unknown a priori. The small base stations (SBSs) satisfy users' content requests either directly from their local caches, or by retrieving them from other SBSs' caches or from the content server. In contrast to previous approaches that assume a static content library at the server, this paper considers a more realistic non-stationary content library, where new contents may emerge over time at different locations. To keep track of the spatio-temporal content dynamics, we propose that the new contents cached at users be exploited by the SBSs to update their flexible cache memories in a timely manner, in addition to their routine off-peak main cache updates from the content server. To account for the variations in traffic demands as well as the limited caching space at the SBSs, a user-assisted caching strategy based on reinforcement learning principles is proposed to progressively optimize the caching policy with the goal of maximizing the weighted network utility in the long run. Simulation results verify the superior performance of the proposed caching strategy against various benchmark designs. |
doi_str_mv | 10.1109/TCOMM.2020.2977895 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0090-6778 |
ispartof | IEEE transactions on communications, 2020-06, Vol.68 (6), p.3627-3639 |
issn | 0090-6778 ; 1558-0857 |
language | eng |
recordid | cdi_ieee_primary_9020168 |
source | IEEE Electronic Library (IEL) |
subjects | cache placement ; Caching ; content delivery ; dynamic content library ; Gallium nitride ; Heuristic algorithms ; Indexes ; Learning ; Libraries ; Microcell networks ; Non-stationary bandit ; Optimization ; Servers ; Strategy ; time-varying popularity ; Upgrading ; User satisfaction |
title | A Reinforcement Learning-Based User-Assisted Caching Strategy for Dynamic Content Library in Small Cell Networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T23%3A51%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Reinforcement%20Learning-Based%20User-Assisted%20Caching%20Strategy%20for%20Dynamic%20Content%20Library%20in%20Small%20Cell%20Networks&rft.jtitle=IEEE%20transactions%20on%20communications&rft.au=Zhang,%20Xinruo&rft.date=2020-06-01&rft.volume=68&rft.issue=6&rft.spage=3627&rft.epage=3639&rft.pages=3627-3639&rft.issn=0090-6778&rft.eissn=1558-0857&rft.coden=IECMBT&rft_id=info:doi/10.1109/TCOMM.2020.2977895&rft_dat=%3Cproquest_RIE%3E2414535603%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2414535603&rft_id=info:pmid/&rft_ieee_id=9020168&rfr_iscdi=true |