Client Selection for Federated Learning in Vehicular Edge Computing: A Deep Reinforcement Learning Approach

Vehicular edge computing (VEC) has emerged as a solution that places computing resources at the edge of the network to address resource management, service continuity, and scalability issues in dynamic vehicular environments. However, VEC faces challenges such as task offloading, varying communication conditions, and data security. To tackle these challenges, federated learning (FL), a distributed machine learning framework that allows multiple clients to collaboratively train a global model without sharing their data, is utilized. However, vehicular clients have characteristics such as non-independent and identically distributed (non-IID) data, diverse communication capabilities, and high mobility, which pose difficulties for model convergence. A dynamic and optimal client selection method is required to address VEC and FL challenges. Therefore, in this paper, we propose a distributed client selection method with multiple objectives that can dynamically adapt to changing conditions. This method combines fuzzy logic with a deep reinforcement learning (DRL)-based deep Q-network (DQN). Initially, the fuzzy logic approach infers client candidates based on the stability of communication links. Subsequently, the DQN approach selects the final clients by considering the objectives of maximizing model accuracy and minimizing processing time and communication overhead. Unlike conventional methods, the proposed method provides an efficient solution that balances different objectives and improves model performance by ensuring comprehensive network coverage. Consequently, the proposed method achieves higher model accuracy, lower processing time, and reduced communication overhead compared to conventional methods.
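As an illustration of the two-stage idea described in the abstract, the following Python sketch shortlists vehicular clients with a fuzzy link-stability score and then ranks the shortlist with a multi-objective score that rewards accuracy and penalizes processing time and communication overhead. This is not the authors' implementation: all class names, membership functions, thresholds, and weights are hypothetical, and the hand-written scoring function merely stands in for the trained deep Q-network.

    # Hypothetical two-stage client selection sketch (not the paper's code).
    # Stage 1: fuzzy-style scoring of link stability to shortlist candidates.
    # Stage 2: a stand-in for the DQN that ranks candidates with a
    # multi-objective score (accuracy up, processing time and overhead down).
    from dataclasses import dataclass
    from typing import List
    import random

    @dataclass
    class Vehicle:
        vid: int
        rssi: float           # received signal strength in dBm (assumed input)
        dwell_time: float     # expected seconds remaining in RSU coverage (assumed)
        local_accuracy: float # accuracy of the locally trained model
        proc_time: float      # seconds per local training round
        comm_overhead: float  # MB uploaded per round

    def triangular(x: float, a: float, b: float, c: float) -> float:
        """Triangular fuzzy membership on [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def link_stability(v: Vehicle) -> float:
        """Fuzzy inference reduced to a product of two memberships (simplified)."""
        good_signal = triangular(v.rssi, -95.0, -60.0, -30.0)
        long_dwell = triangular(v.dwell_time, 0.0, 30.0, 60.0)
        return good_signal * long_dwell

    def shortlist(vehicles: List[Vehicle], threshold: float = 0.3) -> List[Vehicle]:
        """Stage 1: keep only clients whose links look stable enough."""
        return [v for v in vehicles if link_stability(v) >= threshold]

    def score(v: Vehicle, w_acc: float = 1.0, w_time: float = 0.5, w_comm: float = 0.1) -> float:
        """Multi-objective score: favor accuracy, penalize time and overhead."""
        return w_acc * v.local_accuracy - w_time * v.proc_time - w_comm * v.comm_overhead

    def select_clients(vehicles: List[Vehicle], k: int = 3) -> List[Vehicle]:
        """Stage 2: a trained DQN would map (state, client) to Q-values;
        the hand-written score above stands in for that learned function."""
        return sorted(shortlist(vehicles), key=score, reverse=True)[:k]

    if __name__ == "__main__":
        random.seed(0)
        fleet = [Vehicle(i, random.uniform(-95, -40), random.uniform(5, 60),
                         random.uniform(0.6, 0.9), random.uniform(1, 5),
                         random.uniform(5, 20)) for i in range(10)]
        for v in select_clients(fleet):
            print(v.vid, round(link_stability(v), 2), round(score(v), 2))

In the paper's setting, the hand-written score would be replaced by Q-values produced by the DQN, trained with a reward defined over model accuracy, processing time, and communication overhead.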


Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 131337-131348
Main authors: Moon, Sungwon; Lim, Yujin
Format: Article
Language: English
Subjects: Accuracy; Client selection; Computational modeling; Data models; Deep reinforcement learning; Edge computing; Federated learning; Fuzzy logic; Servers; Training; Vehicle dynamics; vehicular edge computing
Online access: Full text
container_end_page 131348
container_issue
container_start_page 131337
container_title IEEE access
container_volume 12
creator Moon, Sungwon
Lim, Yujin
description Vehicular edge computing (VEC) has emerged as a solution that places computing resources at the edge of the network to address resource management, service continuity, and scalability issues in dynamic vehicular environments. However, VEC faces challenges such as task offloading, varying communication conditions, and data security. To tackle these challenges, federated learning (FL), a distributed machine learning framework that allows multiple clients to collaboratively train a global model without sharing their data, is utilized. However, vehicular clients have characteristics such as non-independent and identically distributed (non-IID) data, diverse communication capabilities, and high mobility, which pose difficulties for model convergence. A dynamic and optimal client selection method is required to address VEC and FL challenges. Therefore, in this paper, we propose a distributed client selection method with multi-objectives that can dynamically adapt to changing conditions. This method combines fuzzy logic with deep reinforcement learning (DRL) based deep Q-network (DQN). Initially, the fuzzy logic approach infers client candidates based on the stability of communication links. Subsequently, the DQN approach selects the final clients by considering the objectives of maximizing model accuracy and minimizing processing time and communication overhead. Unlike conventional methods, the proposed method provides an efficient solution that balances different objectives and improves model performance by ensuring comprehensive network coverage. Consequently, the proposed method achieves higher model accuracy, lower processing time, and reduced communication overhead compared to conventional methods.
doi_str_mv 10.1109/ACCESS.2024.3458991
format Article
fulltext fulltext
identifier ISSN: 2169-3536
ispartof IEEE access, 2024, Vol.12, p.131337-131348
issn 2169-3536
2169-3536
language eng
recordid cdi_ieee_primary_10679127
source IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals
subjects Accuracy
Client selection
Computational modeling
Data models
Deep reinforcement learning
Edge computing
Federated learning
Fuzzy logic
Servers
Training
Vehicle dynamics
vehicular edge computing
title Client Selection for Federated Learning in Vehicular Edge Computing: A Deep Reinforcement Learning Approach
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-29T09%3A27%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-doaj_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Client%20Selection%20for%20Federated%20Learning%20in%20Vehicular%20Edge%20Computing:%20A%20Deep%20Reinforcement%20Learning%20Approach&rft.jtitle=IEEE%20access&rft.au=Moon,%20Sungwon&rft.date=2024&rft.volume=12&rft.spage=131337&rft.epage=131348&rft.pages=131337-131348&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2024.3458991&rft_dat=%3Cdoaj_ieee_%3Eoai_doaj_org_article_18382387698b46b0a52c7f46f8db34a5%3C/doaj_ieee_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10679127&rft_doaj_id=oai_doaj_org_article_18382387698b46b0a52c7f46f8db34a5&rfr_iscdi=true