Boosting in-transit entertainment: deep reinforcement learning for intelligent multimedia caching in bus networks
Multimedia content delivery in advanced networks faces exponential growth in data volumes, rendering existing solutions obsolete. This research investigates deep reinforcement learning (DRL) for autonomous optimization without extensive datasets. The work analyzes two prominent DRL algorithms, the Dueling Deep Q-Network (DDQN) and the Deep Q-Network (DQN), for multimedia delivery in simulated bus networks. DDQN utilizes a novel “dueling” architecture that estimates state value and action advantages separately, accelerating learning. DQN employs deep neural networks to approximate optimal policies. The environment simulates urban buses with passenger file requests and cache sizes modeled on actual data. Comparative analysis evaluates cumulative rewards and losses over 1500 training episodes to assess learning efficiency, stability, and performance. Results demonstrate DDQN’s superior convergence and 32% higher cumulative rewards than DQN, although DQN showed potential for gains over successive runs despite inconsistencies. The study establishes DRL’s promise for automated decision-making while revealing enhancements needed to improve DQN. Further research should evaluate generalizability across problem domains, investigate hybrid models, and test on physical systems. DDQN emerged as the most efficient algorithm, highlighting DRL’s potential to enable intelligent agents that optimize multimedia delivery.
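The value learning that underlies both algorithms the abstract compares can be sketched with a minimal tabular Q-learning update; this is a hypothetical simplification (not the authors' code), in which the article's DQN would replace the table with a deep neural network but keep the same temporal-difference target.

```python
# Hypothetical sketch: tabular Q-learning with the standard TD target
#   Q(s, a) <- Q(s, a) + lr * (r + gamma * max_a' Q(s', a') - Q(s, a)).
# DQN approximates Q with a neural network; the update principle is the same.

def q_update(q_table, state, action, reward, next_state, lr=0.5, gamma=0.9):
    """Apply one Q-learning update to a nested-list Q-table in place."""
    best_next = max(q_table[next_state])          # max_a' Q(s', a')
    td_target = reward + gamma * best_next        # bootstrapped return estimate
    q_table[state][action] += lr * (td_target - q_table[state][action])

# Toy run: 2 states, 2 actions (e.g. cache / don't cache a requested file).
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
```

After the single update, the chosen state-action pair moves halfway (lr = 0.5) toward the reward of 1.0, while untouched entries stay at zero.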
Saved in:
Published in: | Soft computing (Berlin, Germany), 2023-12, Vol.27 (24), p.19359-19375 |
---|---|
Main authors: | Lan, Dan; Shin, Incheol |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 19375 |
---|---|
container_issue | 24 |
container_start_page | 19359 |
container_title | Soft computing (Berlin, Germany) |
container_volume | 27 |
creator | Lan, Dan; Shin, Incheol |
description | Multimedia content delivery in advanced networks faces exponential growth in data volumes, rendering existing solutions obsolete. This research investigates deep reinforcement learning (DRL) for autonomous optimization without extensive datasets. The work analyzes two prominent DRL algorithms, the Dueling Deep Q-Network (DDQN) and the Deep Q-Network (DQN), for multimedia delivery in simulated bus networks. DDQN utilizes a novel “dueling” architecture that estimates state value and action advantages separately, accelerating learning. DQN employs deep neural networks to approximate optimal policies. The environment simulates urban buses with passenger file requests and cache sizes modeled on actual data. Comparative analysis evaluates cumulative rewards and losses over 1500 training episodes to assess learning efficiency, stability, and performance. Results demonstrate DDQN’s superior convergence and 32% higher cumulative rewards than DQN, although DQN showed potential for gains over successive runs despite inconsistencies. The study establishes DRL’s promise for automated decision-making while revealing enhancements needed to improve DQN. Further research should evaluate generalizability across problem domains, investigate hybrid models, and test on physical systems. DDQN emerged as the most efficient algorithm, highlighting DRL’s potential to enable intelligent agents that optimize multimedia delivery. |
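The “dueling” aggregation that distinguishes DDQN from DQN can be illustrated with a minimal sketch; this is a hypothetical toy (not the authors' implementation), where in the real network the state value and advantages come from separate heads over shared features, and only the combination step is shown here.

```python
# Hypothetical sketch of the dueling aggregation DDQN uses:
#   Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
# Subtracting the mean advantage makes V and A identifiable, letting the
# network learn the state's value separately from per-action preferences.

def dueling_q_values(state_value, advantages):
    """Combine a scalar state value V(s) with per-action advantages A(s, a)."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Toy example: one state, three candidate caching actions.
q = dueling_q_values(2.0, [1.0, 0.0, -1.0])
```

Because the mean advantage is subtracted out, shifting all advantages by a constant leaves the resulting Q-values unchanged; only their relative ordering matters for action selection.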
doi_str_mv | 10.1007/s00500-023-09354-8 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1432-7643 |
ispartof | Soft computing (Berlin, Germany), 2023-12, Vol.27 (24), p.19359-19375 |
issn | 1432-7643 1433-7479 |
language | eng |
recordid | cdi_proquest_journals_2918001223 |
source | Springer Nature - Complete Springer Journals; ProQuest Central |
subjects | Adaptation Algorithms Artificial Intelligence Artificial neural networks Automation Computational Intelligence Computer simulation Control Decision making Deep learning Efficiency Engineering Feedback Intelligent agents Machine learning Mathematical Logic and Foundations Mechatronics Multimedia Neural Networks Optimization Optimization techniques Robotics Stability analysis |
title | Boosting in-transit entertainment: deep reinforcement learning for intelligent multimedia caching in bus networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T02%3A27%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Boosting%20in-transit%20entertainment:%20deep%20reinforcement%20learning%20for%20intelligent%20multimedia%20caching%20in%20bus%20networks&rft.jtitle=Soft%20computing%20(Berlin,%20Germany)&rft.au=Lan,%20Dan&rft.date=2023-12-01&rft.volume=27&rft.issue=24&rft.spage=19359&rft.epage=19375&rft.pages=19359-19375&rft.issn=1432-7643&rft.eissn=1433-7479&rft_id=info:doi/10.1007/s00500-023-09354-8&rft_dat=%3Cproquest_cross%3E2918001223%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2918001223&rft_id=info:pmid/&rfr_iscdi=true |