Multi-Representation Joint Dynamic Domain Adaptation Network for Cross-Database Facial Expression Recognition
In order to obtain more fine-grained information from multiple sub-feature spaces for domain adaptation, this paper proposes a novel multi-representation joint dynamic domain adaptation network (MJDDAN) and applies it to cross-database facial expression recognition. The MJDDAN uses a hybrid structure to extract multi-representation features and maps the original facial expression features into multiple sub-feature spaces, aligning the expression features of the source domain and target domain in multiple sub-feature spaces from different angles to extract features more comprehensively. Moreover, the MJDDAN proposes the Joint Dynamic Maximum Mean Difference (JD-MMD) model to reduce the difference in feature distribution between subdomains by simultaneously minimizing the maximum mean difference and the local maximum mean difference in each substructure. Three databases, eNTERFACE, FABO, and RAVDESS, are used to design a large number of cross-database transfer-learning facial expression recognition experiments. The emotion recognition accuracies with eNTERFACE, FABO, and RAVDESS as target domains reach 53.64%, 43.66%, and 35.87%, respectively, improvements of 1.79%, 0.85%, and 1.02% over the best comparison method chosen in the article.
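The JD-MMD objective described in the abstract aligns source and target features in every sub-feature space by jointly minimizing a global maximum mean discrepancy and a local, per-class (subdomain) discrepancy. The record gives no implementation details, so the snippet below is only a minimal NumPy sketch of such a joint loss: the linear-kernel formulation, the use of target pseudo-labels for the local term, the averaging over branches, and the trade-off weight `lam` are illustrative assumptions, not the authors' actual MJDDAN code.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Global MMD with a linear kernel: squared distance between the
    mean embeddings of the source and target feature batches."""
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

def local_mmd_linear(Xs, ys, Xt, yt_pseudo, num_classes):
    """Local MMD: per-class mean-embedding distances, averaged over the
    classes present in both domains (target classes from pseudo-labels)."""
    terms = []
    for c in range(num_classes):
        s_mask, t_mask = (ys == c), (yt_pseudo == c)
        if s_mask.any() and t_mask.any():
            delta = Xs[s_mask].mean(axis=0) - Xt[t_mask].mean(axis=0)
            terms.append(float(delta @ delta))
    return float(np.mean(terms)) if terms else 0.0

def jd_mmd_loss(branch_feats_s, branch_feats_t, ys, yt_pseudo,
                num_classes, lam=0.5):
    """Illustrative joint loss: for each sub-feature space (branch), combine
    the global and local terms, then average over branches. `lam` is a
    hypothetical trade-off hyper-parameter."""
    losses = []
    for Fs, Ft in zip(branch_feats_s, branch_feats_t):
        losses.append((1.0 - lam) * mmd_linear(Fs, Ft)
                      + lam * local_mmd_linear(Fs, ys, Ft, yt_pseudo, num_classes))
    return float(np.mean(losses))

# Toy usage: two sub-feature spaces, 6 expression classes.
rng = np.random.default_rng(0)
feats_s = [rng.normal(size=(32, 64)) for _ in range(2)]
feats_t = [rng.normal(loc=0.3, size=(40, 64)) for _ in range(2)]
ys = rng.integers(0, 6, size=32)
yt = rng.integers(0, 6, size=40)   # pseudo-labels in practice
print(jd_mmd_loss(feats_s, feats_t, ys, yt, num_classes=6))
```

In the paper's setting, each branch would correspond to one sub-feature space produced by the hybrid multi-representation structure, and this adaptation loss would presumably be added to the usual source-domain classification loss.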
Saved in:
Published in: | Electronics (Basel), 2024-04, Vol. 13 (8), p. 1470 |
---|---|
Main authors: | Yan, Jingjie; Yue, Yuebo; Yu, Kai; Zhou, Xiaoyang; Liu, Ying; Wei, Jinsheng; Yang, Yuan |
Format: | Article |
Language: | English |
Subjects: | Adaptation; Analysis; Computer peripherals; Datasets; Deep learning; Emotion recognition; Experiments; Face recognition; Hybrid structures; Representations |
Online access: | Full text |
DOI: | 10.3390/electronics13081470 |
ISSN: | 2079-9292 |
Publisher: | MDPI AG, Basel |
Rights: | © 2024 by the authors; open access under the Creative Commons Attribution (CC BY) license |