Adaptive Federated Deep Reinforcement Learning for Proactive Content Caching in Edge Computing
With the aggravation of data explosion and backhaul loads on 5G edge networks, it is difficult for the traditional centralized cloud to meet the low-latency requirements of content access. Federated learning (FL)-based proactive content caching (FPC) can alleviate this problem by placing content...
Saved in:
Published in: | IEEE transactions on parallel and distributed systems 2022-12, Vol.33 (12), p.4767-4782 |
---|---|
Main authors: | Qiao, Dewen; Guo, Songtao; Liu, Defang; Long, Saiqin; Zhou, Pengzhan; Li, Zhetao |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 4782 |
---|---|
container_issue | 12 |
container_start_page | 4767 |
container_title | IEEE transactions on parallel and distributed systems |
container_volume | 33 |
creator | Qiao, Dewen; Guo, Songtao; Liu, Defang; Long, Saiqin; Zhou, Pengzhan; Li, Zhetao |
description | With the aggravation of data explosion and backhaul loads on 5G edge networks, it is difficult for the traditional centralized cloud to meet the low-latency requirements of content access. Federated learning (FL)-based proactive content caching (FPC) can alleviate this problem by placing content in local caches to achieve fast and repeated data access while protecting users' privacy. However, because of the non-independent and identically distributed (Non-IID) data across clients and the limited edge resources, it is unrealistic for FL to aggregate all participating devices in parallel for model updates and to adopt a fixed iteration frequency in the local training process. To address this issue, we propose a distributed, resource-efficient FPC policy to improve content caching efficiency and reduce resource consumption. Through theoretical analysis, we first formulate the FPC problem as a stacked autoencoder (SAE) model loss minimization problem subject to a resource constraint. We then propose an adaptive FPC (AFPC) algorithm combined with deep reinforcement learning (DRL), consisting of two mechanisms: client selection and deciding the number of local iterations. Next, we show that when training data are Non-IID, aggregating the model parameters of all participating devices may not be the optimal strategy for improving FL-based content caching efficiency, and it is more effective to adapt the local iteration frequency when resources are limited. Finally, experimental results on three real datasets demonstrate that AFPC can improve cache efficiency by up to 38.4% and 6.84%, and save resources by up to 47.4% and 35.6%, respectively, compared with traditional multi-armed bandit (MAB)-based and FL-based algorithms. |
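The two mechanisms the abstract names — selecting a subset of clients per round and giving each client its own local-iteration budget — can be illustrated with a toy federated-averaging sketch. This is a hypothetical minimal example, not the authors' AFPC: the DRL controller that would choose the client subset and per-client iteration counts is replaced here by fixed values, the model is a single scalar weight fit by least squares, and all names (`local_update`, `federated_round`, `select_k`, `iters_per_client`) are illustrative.

```python
import random

def local_update(w, data, num_iters, lr=0.1):
    """Run num_iters local SGD steps on a 1-D least-squares toy model.
    Each data point is an (x, y) pair; the model is a single weight w."""
    for _ in range(num_iters):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_round(global_w, clients, select_k, iters_per_client):
    """One FL round: select a subset of clients (rather than aggregating
    all of them), let each run its own local-iteration budget, then
    average the returned weights proportionally to local data size."""
    chosen = random.sample(list(clients), k=select_k)
    total = sum(len(clients[c]) for c in chosen)
    new_w = 0.0
    for c in chosen:
        w_c = local_update(global_w, clients[c], iters_per_client[c])
        new_w += (len(clients[c]) / total) * w_c   # FedAvg-style weighting
    return new_w
```

With clients holding differently sized slices of data drawn from the same line y = 3x, repeated rounds drive the global weight toward 3 even though only part of the client pool participates in each round and clients run unequal numbers of local steps.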
doi_str_mv | 10.1109/TPDS.2022.3201983 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1045-9219 |
ispartof | IEEE transactions on parallel and distributed systems, 2022-12, Vol.33 (12), p.4767-4782 |
issn | 1045-9219 1558-2183 |
language | eng |
recordid | cdi_proquest_journals_2714893627 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation models; Adaptive algorithms; Caching; Cloud computing; Content caching; Data models; Deep learning; deep reinforcement learning; Delays; Edge computing; Efficiency; Feature extraction; federated learning; Internet of Things; Iterative methods; Machine learning; Optimization; Reinforcement learning; resource constraint; Servers; Training |
title | Adaptive Federated Deep Reinforcement Learning for Proactive Content Caching in Edge Computing |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-03T14%3A52%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adaptive%20Federated%20Deep%20Reinforcement%20Learning%20for%20Proactive%20Content%20Caching%20in%20Edge%20Computing&rft.jtitle=IEEE%20transactions%20on%20parallel%20and%20distributed%20systems&rft.au=Qiao,%20Dewen&rft.date=2022-12-01&rft.volume=33&rft.issue=12&rft.spage=4767&rft.epage=4782&rft.pages=4767-4782&rft.issn=1045-9219&rft.eissn=1558-2183&rft.coden=ITDSEO&rft_id=info:doi/10.1109/TPDS.2022.3201983&rft_dat=%3Cproquest_RIE%3E2714893627%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2714893627&rft_id=info:pmid/&rft_ieee_id=9868114&rfr_iscdi=true |