Multi-modal Emotion Estimation for in-the-wild Videos

In this paper, we briefly introduce our submission to the Valence-Arousal Estimation Challenge of the 3rd Affective Behavior Analysis in-the-wild (ABAW) competition. Our method utilizes multi-modal information, i.e., visual and audio information, and employs a temporal encoder to model the temporal context in the videos. In addition, a smoothing processor is applied to obtain more reasonable predictions, and a model ensemble strategy is used to improve performance. The experimental results show that our method achieves 65.55% CCC for valence and 70.88% CCC for arousal on the validation set of the Aff-Wild2 dataset, which proves the effectiveness of our proposed method.
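
The abstract's two reproducible ingredients can be made concrete. Below is a minimal sketch, not the authors' code: a standard implementation of the Concordance Correlation Coefficient (CCC), the challenge metric behind the 65.55%/70.88% figures, together with a plain moving-average smoother standing in for the paper's "smooth processor", whose exact form this record does not specify. All names and the synthetic data are illustrative.

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient between two 1-D sequences."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

def smooth(pred: np.ndarray, window: int = 5) -> np.ndarray:
    """Centered moving average over per-frame predictions (an assumed
    form of the smoothing step; the paper may use a different filter)."""
    kernel = np.ones(window) / window
    return np.convolve(pred, kernel, mode="same")

# Usage on synthetic data: smoothing noisy per-frame outputs typically
# raises CCC, consistent with the abstract's claim that the smoothing
# step yields more reasonable predictions.
rng = np.random.default_rng(0)
labels = np.sin(np.linspace(0.0, 6.0, 200))        # synthetic valence track
preds = labels + 0.3 * rng.standard_normal(200)    # noisy model predictions
print(f"CCC raw:      {ccc(labels, preds):.4f}")
print(f"CCC smoothed: {ccc(labels, smooth(preds)):.4f}")
```

A reported score such as 65.55% corresponds to ccc(...) = 0.6555 computed against the Aff-Wild2 validation annotations.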

Bibliographic Details
Main Authors: Meng, Liyu; Liu, Yuchen; Liu, Xiaolong; Huang, Zhaopei; Cheng, Yuan; Wang, Meng; Liu, Chuanhe; Jin, Qin
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Full text at https://arxiv.org/abs/2203.13032
DOI: 10.48550/arxiv.2203.13032
Published: 2022-03-24
Source: arXiv.org