Capturing Feature and Label Relations Simultaneously for Multiple Facial Action Unit Recognition

Although both feature dependencies and label dependencies are crucial for facial action unit (AU) recognition, little work has addressed them simultaneously to date. In this paper, we propose a 4-layer Restricted Boltzmann Machine (RBM) to simultaneously capture feature-level and label-level dependencies to recognize multiple AUs. The middle hidden layer of the 4-layer RBM model captures dependencies among image features for multiple AUs, while the top latent units capture the high-order semantic dependencies among AU labels. Furthermore, we extend the proposed 4-layer RBM for facial expression-augmented AU recognition, since AU relations are influenced by expressions. By introducing facial expression nodes in the middle visible layer, facial expressions, which are required only during training, facilitate the estimation of both feature dependencies and label dependencies among AUs. Efficient learning and inference algorithms for the extended model are also developed. Experimental results on three benchmark databases, i.e., the CK+, DISFA, and SEMAINE databases, demonstrate that the proposed approaches can successfully capture complex AU relationships from features and labels jointly, and that the expression labels available only during training are beneficial for AU recognition during testing for both posed and spontaneous facial expressions.
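The abstract describes a stack in which a middle hidden layer models dependencies among image features, while top latent units model high-order dependencies among the AU labels. The sketch below is purely illustrative and is not the authors' implementation: the layer sizes, the randomly initialized weight matrices, and the single mean-field refinement pass are all assumptions made for demonstration; a real model would learn these parameters from data.

```python
import numpy as np

# Illustrative sketch of a 4-layer RBM-style stack (NOT the authors' code):
# image features -> middle hidden layer (feature-level dependencies)
# -> AU labels <-> top latent units (label-level dependencies).

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer sizes, chosen only for this example.
n_feat, n_hid, n_au, n_top = 64, 32, 12, 8

# Randomly initialized weights stand in for learned parameters.
W_feat_hid = rng.normal(0, 0.1, (n_feat, n_hid))  # features -> middle hidden
W_hid_au   = rng.normal(0, 0.1, (n_hid, n_au))    # middle hidden -> AU labels
W_au_top   = rng.normal(0, 0.1, (n_au, n_top))    # AU labels <-> top latent

x = rng.random(n_feat)            # one image-feature vector

h1 = sigmoid(x @ W_feat_hid)      # middle hidden layer: feature dependencies
y0 = sigmoid(h1 @ W_hid_au)       # initial per-AU label probabilities

# One mean-field refinement through the top latent layer, which encodes
# high-order dependencies among the AU labels.
h2 = sigmoid(y0 @ W_au_top)
y  = sigmoid(h1 @ W_hid_au + h2 @ W_au_top.T)

au_active = y > 0.5               # per-AU binary decisions, shape (12,)
print(au_active.shape)
```

In this toy pass the refined estimate `y` combines bottom-up evidence from the features with top-down evidence from the label-level latent units, which is the intuition behind capturing both kinds of dependency jointly.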

Full description

Saved in:
Bibliographic details
Published in: IEEE transactions on affective computing 2019-07, Vol.10 (3), p.348-359
Main authors: Wang, Shangfei, Wu, Shan, Peng, Guozhu, Ji, Qiang
Format: Article
Language: English
Subjects:
Online access: Order full text
description Although both feature dependencies and label dependencies are crucial for facial action unit (AU) recognition, little work has addressed them simultaneously to date. In this paper, we propose a 4-layer Restricted Boltzmann Machine (RBM) to simultaneously capture feature-level and label-level dependencies to recognize multiple AUs. The middle hidden layer of the 4-layer RBM model captures dependencies among image features for multiple AUs, while the top latent units capture the high-order semantic dependencies among AU labels. Furthermore, we extend the proposed 4-layer RBM for facial expression-augmented AU recognition, since AU relations are influenced by expressions. By introducing facial expression nodes in the middle visible layer, facial expressions, which are required only during training, facilitate the estimation of both feature dependencies and label dependencies among AUs. Efficient learning and inference algorithms for the extended model are also developed. Experimental results on three benchmark databases, i.e., the CK+, DISFA, and SEMAINE databases, demonstrate that the proposed approaches can successfully capture complex AU relationships from features and labels jointly, and that the expression labels available only during training are beneficial for AU recognition during testing for both posed and spontaneous facial expressions.
doi_str_mv 10.1109/TAFFC.2017.2737540
format Article
identifier ISSN: 1949-3045
source IEEE Electronic Library (IEL)
subjects Algorithms
AU recognition
Boltzmann equation
expression-augmented
Face recognition
Feature recognition
Labels
Learning (artificial intelligence)
Machine learning
RBM
Training