M3BAT: Unsupervised Domain Adaptation for Multimodal Mobile Sensing with Multi-Branch Adversarial Training

Over the years, multimodal mobile sensing has been used extensively for inferences regarding health and well-being, behavior, and context. However, a significant challenge hindering the widespread deployment of such models in real-world scenarios is the issue of distribution shift: the phenomenon where the distribution of data in the training set differs from the distribution of data in the real world, that is, the deployment environment.

Detailed description

Saved in:
Bibliographic details
Published in: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2024-05, Vol. 8 (2), p. 1-30, Article 46
Main authors: Meegahapola, Lakmal; Hassoune, Hamza; Gatica-Perez, Daniel
Format: Article
Language: English
Online access: Full text
Description: Over the years, multimodal mobile sensing has been used extensively for inferences regarding health and well-being, behavior, and context. However, a significant challenge hindering the widespread deployment of such models in real-world scenarios is the issue of distribution shift. This is the phenomenon where the distribution of data in the training set differs from the distribution of data in the real world---the deployment environment. While extensively explored in computer vision and natural language processing, and while prior research in mobile sensing briefly addresses this concern, current work primarily focuses on models dealing with a single modality of data, such as audio or accelerometer readings, and consequently, there is little research on unsupervised domain adaptation when dealing with multimodal sensor data. To address this gap, we conducted extensive experiments with domain adversarial neural networks (DANN), showing that they can effectively handle distribution shifts in multimodal sensor data. Moreover, we proposed a novel improvement over DANN, called M3BAT (unsupervised domain adaptation for multimodal mobile sensing with multi-branch adversarial training), to account for the multimodality of sensor data during domain adaptation with multiple branches. Through experiments conducted on two multimodal mobile sensing datasets, three inference tasks, and 14 source-target domain pairs, including both regression and classification, we demonstrate that our approach performs effectively on unseen domains. Compared to directly deploying a model trained in the source domain to the target domain, the model shows performance increases of up to 12% AUC (area under the receiver operating characteristic curve) on classification tasks, and up to 0.13 MAE (mean absolute error) on regression tasks.
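The DANN technique named in the abstract rests on a gradient reversal layer (GRL): the forward pass is the identity, while the backward pass flips the sign of the gradient (scaled by a factor lambda) before it reaches the feature extractor, so the features are trained to confuse a domain discriminator. The sketch below illustrates only that sign-flip mechanic in plain NumPy; the array values and the lambda of 0.5 are illustrative assumptions, not details from the paper, and M3BAT's contribution is to attach one such adversarial branch per sensor modality rather than a single shared one.

```python
import numpy as np

def grl_forward(x):
    """Gradient reversal layer, forward pass: identity on the features."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip the sign of the incoming gradient and scale by lambda,
    so the feature extractor ascends the domain classifier's loss."""
    return -lam * grad_output

features = np.array([0.5, -1.2, 3.0])
out = grl_forward(features)                          # features pass through unchanged
grad_in = grl_backward(np.ones_like(features), lam=0.5)  # reversed gradient: [-0.5, -0.5, -0.5]
```

In an autograd framework this pair of functions would be registered as one custom operation inserted between the shared feature extractor and the domain discriminator, which is what makes the adversarial min-max trainable with ordinary backpropagation.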
DOI: 10.1145/3659591
Publisher: New York, NY, USA: ACM
Published: 2024-05-15
EISSN: 2474-9567
ORCID: 0000-0001-5488-2182; 0009-0000-0239-9228; 0000-0002-5275-6585
ISSN: 2474-9567
Source: ACM Digital Library Complete
Subjects: Computing methodologies
Human-centered computing
Learning paradigms
Learning under covariate shift
Machine learning
Multi-task learning
Transfer learning
Ubiquitous and mobile computing
Ubiquitous and mobile computing theory, concepts and paradigms
Ubiquitous computing