Systematic Evaluation of Personalized Deep Learning Models for Affect Recognition
Understanding human affective states such as emotion and stress is crucial for both practical applications and theoretical research, driving advancements in the field of affective computing. While traditional approaches often rely on generalized models trained on aggregated data, recent studies highlight the importance of personalized models that account for individual differences in affective responses. However, there remains a significant gap in research regarding the comparative evaluation of various personalization techniques across multiple datasets. In this study, we address this gap by systematically evaluating widely-used deep learning-based personalization techniques for affect recognition across five open datasets (i.e., AMIGOS, ASCERTAIN, WESAD, CASE, and K-EmoCon). Our analysis focuses on realistic scenarios where models must adapt to new, unseen users with limited available data, reflecting real-world conditions. We emphasize the principles of reproducibility by utilizing open datasets and making our evaluation models and codebase publicly available. Our findings provide critical insights into the generalizability of personalization techniques, the data requirements for effective personalization, and the relative performance of different approaches. This work offers valuable contributions to the development of personalized affect recognition systems, fostering advancements in both methodology and practical application.
Saved in:
Published in: | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2024-11, Vol.8 (4), p.1-35, Article 206 |
---|---|
Main authors: | Han, Yunjo; Zhang, Panyu; Park, Minseo; Lee, Uichin |
Format: | Article |
Language: | eng |
Subjects: | Applied computing; Human-centered computing; Life and medical sciences; Ubiquitous and mobile computing |
Online access: | Full text |
container_end_page | 35 |
---|---|
container_issue | 4 |
container_start_page | 1 |
container_title | Proceedings of ACM on interactive, mobile, wearable and ubiquitous technologies |
container_volume | 8 |
creator | Han, Yunjo; Zhang, Panyu; Park, Minseo; Lee, Uichin |
description | Understanding human affective states such as emotion and stress is crucial for both practical applications and theoretical research, driving advancements in the field of affective computing. While traditional approaches often rely on generalized models trained on aggregated data, recent studies highlight the importance of personalized models that account for individual differences in affective responses. However, there remains a significant gap in research regarding the comparative evaluation of various personalization techniques across multiple datasets. In this study, we address this gap by systematically evaluating widely-used deep learning-based personalization techniques for affect recognition across five open datasets (i.e., AMIGOS, ASCERTAIN, WESAD, CASE, and K-EmoCon). Our analysis focuses on realistic scenarios where models must adapt to new, unseen users with limited available data, reflecting real-world conditions. We emphasize the principles of reproducibility by utilizing open datasets and making our evaluation models and codebase publicly available. Our findings provide critical insights into the generalizability of personalization techniques, the data requirements for effective personalization, and the relative performance of different approaches. This work offers valuable contributions to the development of personalized affect recognition systems, fostering advancements in both methodology and practical application. |
doi_str_mv | 10.1145/3699724 |
format | Article |
fullrecord | Title: Systematic Evaluation of Personalized Deep Learning Models for Affect Recognition. Authors: Han, Yunjo; Zhang, Panyu; Park, Minseo; Lee, Uichin. Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (ACM IMWUT), Vol. 8, Issue 4, pp. 1-35, Article 206, published 2024-11-21. Publisher: ACM, New York, NY, USA. ISSN/EISSN: 2474-9567. DOI: 10.1145/3699724. Subjects: Applied computing; Human-centered computing; Life and medical sciences; Ubiquitous and mobile computing. Peer reviewed; free to read. Full-text PDF: https://dl.acm.org/doi/pdf/10.1145/3699724. ORCID iDs: 0000-0002-7014-6940; 0000-0002-8981-1293; 0009-0002-8612-1219; 0000-0002-1888-1569. |
fulltext | fulltext |
identifier | ISSN: 2474-9567 |
ispartof | Proceedings of ACM on interactive, mobile, wearable and ubiquitous technologies, 2024-11, Vol.8 (4), p.1-35, Article 206 |
issn | 2474-9567 |
language | eng |
recordid | cdi_crossref_primary_10_1145_3699724 |
source | Access via ACM Digital Library |
subjects | Applied computing; Human-centered computing; Life and medical sciences; Ubiquitous and mobile computing |
title | Systematic Evaluation of Personalized Deep Learning Models for Affect Recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T21%3A44%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-acm_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Systematic%20Evaluation%20of%20Personalized%20Deep%20Learning%20Models%20for%20Affect%20Recognition&rft.jtitle=Proceedings%20of%20ACM%20on%20interactive,%20mobile,%20wearable%20and%20ubiquitous%20technologies&rft.au=Han,%20Yunjo&rft.date=2024-11-21&rft.volume=8&rft.issue=4&rft.spage=1&rft.epage=35&rft.pages=1-35&rft.artnum=206&rft.issn=2474-9567&rft.eissn=2474-9567&rft_id=info:doi/10.1145/3699724&rft_dat=%3Cacm_cross%3E3699724%3C/acm_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |