Multimodal Face-Pose Estimation With Multitask Manifold Deep Learning


Bibliographic Details
Published in: IEEE transactions on industrial informatics, 2019-07, Vol. 15 (7), p. 3952-3961
Authors: Hong, Chaoqun; Yu, Jun; Zhang, Jian; Jin, Xiongnan; Lee, Kyong-Ho
Format: Article
Language: English
Abstract: Face-pose estimation aims to estimate the gaze direction from two-dimensional face images. It provides important communicative information and visual saliency cues. However, it is challenging because of lighting, background, face orientation, and appearance visibility. Therefore, a descriptive representation of face images, and a mapping from that representation to poses, are critical. In this paper, we use multimodal data and propose a novel face-pose estimation framework named multitask manifold deep learning (M²DL). It is based on feature extraction with improved convolutional neural networks (CNNs) and multimodal mapping-relationship learning with multitask learning. In the proposed CNNs, manifold regularized convolutional layers learn the relationships between outputs of neurons in a low-rank space. In addition, in the proposed mapping-relationship learning method, different modalities of face representations are naturally combined by applying multitask learning with incoherent sparse and low-rank learning and a least-squares loss. Experimental results on three challenging benchmark datasets demonstrate the performance of M²DL.
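The manifold regularization mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian-kernel affinity graph, the bandwidth `sigma`, and the weight `lam` are illustrative assumptions. Only the general form of the penalty, tr(FᵀLF) with L a graph Laplacian built over the feature rows of F, follows the standard manifold regularizer.

```python
import numpy as np

def graph_laplacian(F, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W over the rows of F.

    W is a Gaussian-kernel affinity between samples (an assumption;
    the paper's affinity construction may differ)."""
    sq = np.sum(F ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * F @ F.T, 0.0)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def manifold_regularized_loss(F, y_pred, y_true, lam=0.1):
    """Least-squares task loss plus a manifold smoothness penalty.

    The penalty tr(F^T L F) is small when samples that are close under
    the affinity graph also have similar feature rows in F."""
    L = graph_laplacian(F)
    task = np.mean((y_pred - y_true) ** 2)
    reg = np.trace(F.T @ L @ F) / F.shape[0]
    return task + lam * reg
```

When all feature rows are identical, every column of F is a constant vector, which the Laplacian annihilates, so the penalty vanishes and the loss reduces to the least-squares term.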
DOI: 10.1109/TII.2018.2884211
Publisher: IEEE, Piscataway
CODEN: ITIICH
ISSN: 1551-3203
EISSN: 1941-0050
Source: IEEE Electronic Library (IEL)
Subjects:
Artificial neural networks
Convolutional neural networks (CNNs)
Deep learning
Face
face-pose estimation
Feature extraction
Informatics
low-rank learning
Machine learning
Manifolds
Mapping
multitask learning
Neurons
Pose estimation
Representations
Task analysis
Visibility
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T16%3A25%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multimodal%20Face-Pose%20Estimation%20With%20Multitask%20Manifold%20Deep%20Learning&rft.jtitle=IEEE%20transactions%20on%20industrial%20informatics&rft.au=Hong,%20Chaoqun&rft.date=2019-07-01&rft.volume=15&rft.issue=7&rft.spage=3952&rft.epage=3961&rft.pages=3952-3961&rft.issn=1551-3203&rft.eissn=1941-0050&rft.coden=ITIICH&rft_id=info:doi/10.1109/TII.2018.2884211&rft_dat=%3Cproquest_RIE%3E2253469232%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2253469232&rft_id=info:pmid/&rft_ieee_id=8554134&rfr_iscdi=true