FedMultimodal: A Benchmark For Multimodal Federated Learning
Over the past few years, Federated Learning (FL) has emerged as a machine learning technique for tackling data privacy challenges through collaborative training. In an FL algorithm, clients submit locally trained models, and the server aggregates their parameters until convergence. Despite the significant effort devoted to FL in fields such as computer vision, audio, and natural language processing, FL applications that use multimodal data streams remain largely unexplored. Multimodal learning has broad real-world applications in emotion recognition, healthcare, multimedia, and social media, where user privacy persists as a critical concern, yet no existing FL benchmarks target multimodal applications or related tasks. To facilitate research in multimodal FL, we introduce FedMultimodal, the first FL benchmark for multimodal learning, covering five representative multimodal applications from ten commonly used datasets with a total of eight unique modalities. FedMultimodal offers a systematic FL pipeline that enables end-to-end modeling, from data partitioning and feature extraction to FL benchmark algorithms and model evaluation. Unlike existing FL benchmarks, FedMultimodal provides a standardized approach to assessing the robustness of FL against three common data corruptions in real-life multimodal applications: missing modalities, missing labels, and erroneous labels. We hope that FedMultimodal can accelerate numerous future research directions, including multimodal FL algorithms for extreme data heterogeneity, robust multimodal FL, and efficient multimodal FL. The datasets and benchmark results can be accessed at: https://github.com/usc-sail/fed-multimodal.
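The training loop the abstract describes, where clients submit locally trained models and the server aggregates their parameters each round, is the standard federated-averaging (FedAvg) pattern. Below is a minimal NumPy sketch of that loop under simplified assumptions: a least-squares objective, synthetic client shards, and a hypothetical `local_sgd` helper. It illustrates the general scheme, not FedMultimodal's implementation.

```python
# Minimal sketch of a FedAvg-style loop: clients train locally on private
# data, the server aggregates parameters each round. Illustrative only;
# `local_sgd` and the synthetic shards below are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=1):
    """One client's local update on a least-squares objective."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * ||Xw - y||^2 / n
        w = w - lr * grad
    return w

# Synthetic "clients": each holds a private (X, y) shard it never uploads.
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(5)]

w_global = np.zeros(4)
for round_ in range(20):
    # Each client starts from the current global model and trains locally.
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    # Server step: average the local models, weighted by local dataset size.
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_models, axis=0, weights=sizes)
```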
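The three data corruptions the benchmark standardizes (missing modalities, missing labels, erroneous labels) can be pictured as perturbations applied to each client's local data before training. The sketch below simulates all three on toy audio/video features; the corruption rates, the NaN and -1 sentinels, and the `corrupt_client` helper are assumptions for illustration, not the benchmark's actual mechanism.

```python
# Hedged sketch of the three corruption types on one client's toy data.
# Rates and sentinel conventions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def corrupt_client(audio, video, labels, n_classes,
                   p_missing_mod=0.3, p_missing_lab=0.2, p_err_lab=0.1):
    n = len(labels)
    # 1) Missing modality: mask the video features for some samples with NaN.
    drop = rng.random(n) < p_missing_mod
    video = video.copy()
    video[drop] = np.nan
    # 2) Missing labels: mark some labels as unavailable (-1 sentinel).
    labels = labels.copy()
    labels[rng.random(n) < p_missing_lab] = -1
    # 3) Erroneous labels: flip some remaining labels to a random other class.
    flip = (rng.random(n) < p_err_lab) & (labels >= 0)
    offsets = rng.integers(1, n_classes, flip.sum())
    labels[flip] = (labels[flip] + offsets) % n_classes
    return audio, video, labels

# Example: one client's toy audio/video features with 4-class labels.
a = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 8))
y = rng.integers(0, 4, 16)
a, v, y = corrupt_client(a, v, y, n_classes=4)
```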
Published in: | arXiv.org, 2023-06 |
---|---|
Main Authors: | Feng, Tiantian; Bose, Digbalay; Zhang, Tuo; Hebbar, Rajat; Ramakrishna, Anil; Gupta, Rahul; Zhang, Mi; Avestimehr, Salman; Narayanan, Shrikanth |
Format: | Article |
Language: | English (eng) |
Subjects: | Algorithms; Audio data; Benchmarks; Computer vision; Data transmission; Datasets; Emotion recognition; Feature extraction; Heterogeneity; Labels; Machine learning; Multimedia; Natural language processing; Privacy; Robustness |
Identifier: | EISSN: 2331-8422 |
Publisher: | Ithaca: Cornell University Library, arXiv.org |
Source: | Free E-Journals |
Online Access: | Full text |