Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks


Published in: IEEE Open Journal of Vehicular Technology, 2024, Vol. 5, pp. 721-736
Main authors: Nakayima, Olivia; Soliman, Mostafa I.; Ueda, Kazunori; Mohamed, Samir A. Elsagheer
Format: Article
Language: English
Online access: Full text
Abstract: Ensuring reliable data transmission in all Vehicular Ad-hoc Network (VANET) segments is paramount in modern vehicular communications. Vehicular operations face unpredictable network conditions that affect routing-protocol adaptiveness. Several solutions have addressed these challenges, but each has noted shortcomings. This work proposes a centralised-controller multi-agent (CCMA) algorithm based on Software-Defined Networking (SDN) and Delay-Tolerant Networking (DTN) principles to enhance VANET performance using Reinforcement Learning (RL). The algorithm is trained and validated in a simulation environment modelling the network nodes, routing protocols and buffer schedules. It optimally deploys DTN routing protocols (Spray and Wait, Epidemic, and PRoPHETv2) and buffer schedules (Random, Defer, Earliest Deadline First, First In First Out, Largest/Smallest Bundle First) based on network state information (i.e., traffic pattern, buffer size variance, node and link uptime, bundle Time To Live (TTL), link loss and capacity). These are implemented in three environment types: Advanced Technological Regions, Limited Resource Regions and Opportunistic Communication Regions. The study assesses the performance of the multi-protocol approach using the metrics TTL, buffer management, link quality, delivery ratio, latency and overhead scores. Comparative analysis with single-protocol VANETs (simulated using the Opportunistic Network Environment (ONE)) demonstrates improved performance of the proposed algorithm in all VANET scenarios.
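The abstract describes the CCMA controller as an RL agent that maps observed network state to a choice of DTN routing protocol and buffer schedule per region. The sketch below illustrates that control loop under stated assumptions: tabular Q-learning stands in for the paper's deep RL agent, the state discretisation and reward weights are invented for illustration, and a random-number stub replaces a real simulator such as ONE. It is not the authors' implementation.

```python
# Hypothetical sketch: a centralised controller picks a (routing protocol,
# buffer schedule) pair per region via tabular Q-learning. All names,
# thresholds and reward weights are illustrative assumptions.
import random
from collections import defaultdict
from itertools import product

PROTOCOLS = ["SprayAndWait", "Epidemic", "PRoPHETv2"]
SCHEDULES = ["Random", "Defer", "EDF", "FIFO", "LargestFirst", "SmallestFirst"]
ACTIONS = list(product(PROTOCOLS, SCHEDULES))  # joint action space (18 pairs)

def discretise(state):
    """Coarsely bucket the state features named in the abstract."""
    return (
        "hi" if state["traffic"] > 0.5 else "lo",         # traffic pattern
        "hi" if state["buffer_var"] > 0.5 else "lo",      # buffer size variance
        "up" if state["link_uptime"] > 0.8 else "flaky",  # node/link uptime
        "lossy" if state["link_loss"] > 0.1 else "clean", # link loss
    )

def reward(metrics):
    """Assumed scalarisation of the paper's metrics (weights are guesses);
    latency and overhead are taken as normalised to [0, 1]."""
    return (2.0 * metrics["delivery_ratio"]
            - 1.0 * metrics["latency"]
            - 0.5 * metrics["overhead"])

Q = defaultdict(float)           # (state_key, action_index) -> value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def choose_action(key):
    if random.random() < EPS:                 # explore
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(key, a)])  # exploit

def update(key, action, r, next_key):
    best_next = max(Q[(next_key, a)] for a in range(len(ACTIONS)))
    Q[(key, action)] += ALPHA * (r + GAMMA * best_next - Q[(key, action)])

if __name__ == "__main__":
    # Stub environment: random transitions stand in for an ONE-style simulator.
    state = {"traffic": 0.6, "buffer_var": 0.3,
             "link_uptime": 0.9, "link_loss": 0.05}
    for _ in range(1000):
        key = discretise(state)
        action = choose_action(key)
        metrics = {"delivery_ratio": random.random(),
                   "latency": random.random(),
                   "overhead": random.random()}
        next_state = {k: random.random() for k in state}
        update(key, action, reward(metrics), discretise(next_state))
        state = next_state
    key = discretise(state)
    greedy = max(range(len(ACTIONS)), key=lambda a: Q[(key, a)])
    print("learned choice for sample state:", ACTIONS[greedy])
```

In the paper's setting the stub transitions would be replaced by simulator feedback from the three region types (Advanced Technological, Limited Resource, Opportunistic Communication), and a deep network would replace the Q-table over the continuous state features.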
DOI: 10.1109/OJVT.2024.3396637
ISSN: 2644-1330
EISSN: 2644-1330
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals
Subjects: Delay-tolerant networks; Epidemics; Optimization; performance analysis; reinforcement learning; Routing; Routing protocols; Security; simulator; software-defined networking; Vehicle dynamics; Vehicular ad hoc networks