Leveraging recent advances in deep learning for audio-Visual emotion recognition


Detailed Description

Bibliographic Details
Published in: Pattern recognition letters, 2021-06, Vol. 146, p. 1-7
Main authors: Schoneveld, Liam; Othmani, Alice; Abdelkawy, Hazem
Format: Article
Language: English
Online access: Full text
container_end_page 7
container_issue
container_start_page 1
container_title Pattern recognition letters
container_volume 146
creator Schoneveld, Liam; Othmani, Alice; Abdelkawy, Hazem
description
• A new high-performing deep neural network-based approach for Audio-Visual Emotion Recognition (AVER).
• Learning two independent feature extractors specialised for emotion recognition.
• Learning two independent feature extractors that could be employed for any downstream audio-visual emotion recognition task.
• Applying knowledge distillation (specifically, self-distillation), alongside additional unlabeled data for FER.
• Learning the spatio-temporal dynamics via a recurrent neural network for AVER.

Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are expressed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities, mainly facial, vocal and physical gestures. Recently, spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused based on a model-level fusion strategy. A recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
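The abstract describes a pipeline of model-level fusion of audio and visual feature representations followed by a recurrent network that predicts valence over time. A minimal NumPy sketch of that idea is shown below; all dimensions, weights, and the vanilla RNN cell are illustrative assumptions, not the paper's actual architecture or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame embeddings from two pretrained modality-specific
# feature extractors (sizes are illustrative, not from the paper).
T, d_vis, d_aud, d_hid = 10, 512, 128, 64
visual_feats = rng.standard_normal((T, d_vis))
audio_feats = rng.standard_normal((T, d_aud))

# Model-level fusion: concatenate the two modality embeddings per frame.
fused = np.concatenate([visual_feats, audio_feats], axis=1)  # shape (T, 640)

# A vanilla RNN cell over the fused sequence stands in for the paper's
# recurrent network; it captures temporal dynamics across frames.
W_x = rng.standard_normal((d_vis + d_aud, d_hid)) * 0.01
W_h = rng.standard_normal((d_hid, d_hid)) * 0.01
b = np.zeros(d_hid)
w_out = rng.standard_normal(d_hid) * 0.01  # regression head for valence

h = np.zeros(d_hid)
valence = []
for x_t in fused:
    h = np.tanh(x_t @ W_x + h @ W_h + b)
    valence.append(np.tanh(h @ w_out))  # one valence estimate per frame, in [-1, 1]

valence = np.array(valence)
print(fused.shape, valence.shape)  # → (10, 640) (10,)
```

This only illustrates the data flow (independent extractors, concatenation-based fusion, recurrent temporal modeling); the paper's trained networks and distillation procedure are not reproduced here.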
doi_str_mv 10.1016/j.patrec.2021.03.007
format Article
fulltext fulltext
identifier ISSN: 0167-8655
ispartof Pattern recognition letters, 2021-06, Vol.146, p.1-7
issn 0167-8655
1872-7344
language eng
recordid cdi_hal_primary_oai_HAL_hal_04032955v1
source Elsevier ScienceDirect Journals
subjects Affective computing
Artificial Intelligence
Audiovisual emotion recognition
Computer Science
Datasets
Deep learning
Distillation
Emotion recognition
Emotional behavior
Emotional factors
Emotions
Feature extraction
Human behavior
Human behavior recognition
Machine learning
Neural networks
Recurrent neural networks
Verbal communication
Video sequences
Visual discrimination learning
title Leveraging recent advances in deep learning for audio-Visual emotion recognition
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T18%3A18%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_hal_p&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Leveraging%20recent%20advances%20in%20deep%20learning%20for%20audio-Visual%20emotion%20recognition&rft.jtitle=Pattern%20recognition%20letters&rft.au=Schoneveld,%20Liam&rft.date=2021-06&rft.volume=146&rft.spage=1&rft.epage=7&rft.pages=1-7&rft.issn=0167-8655&rft.eissn=1872-7344&rft_id=info:doi/10.1016/j.patrec.2021.03.007&rft_dat=%3Cproquest_hal_p%3E2533404050%3C/proquest_hal_p%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2533404050&rft_id=info:pmid/&rft_els_id=S0167865521000878&rfr_iscdi=true