Recognizing Spontaneous Micro-Expression Using a Three-Stream Convolutional Neural Network

Micro-expression recognition (MER) has attracted much attention with various practical applications, particularly in clinical diagnosis and interrogations. In this paper, we propose a three-stream convolutional neural network (TSCNN) to recognize MEs by learning ME-discriminative features in three key frames of ME videos. We design a dynamic-temporal stream, static-spatial stream, and local-spatial stream module for the TSCNN that respectively attempt to learn and integrate temporal, entire facial region, and facial local region cues in ME videos with the goal of recognizing MEs. In addition, to allow the TSCNN to recognize MEs without using the index values of apex frames, we design a reliable apex frame detection algorithm. Extensive experiments are conducted with five public ME databases: CASME II, SMIC-HS, SAMM, CAS(ME)², and CASME. Our proposed TSCNN is shown to achieve more promising recognition results when compared with many other methods.
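The abstract notes that the TSCNN operates on three key frames and includes an apex frame detection algorithm so that apex indices need not be supplied with the video. As a rough illustration only — this is a generic frame-difference heuristic, not the specific algorithm proposed in the paper — an apex frame can be approximated as the frame that differs most from the onset (first) frame; the function name `detect_apex_frame` and the toy clip below are invented for this sketch:

```python
def detect_apex_frame(frames):
    """Return the index of the approximate apex frame.

    The apex is taken as the frame whose mean absolute pixel
    difference from the onset (first) frame is largest. `frames`
    is a list of equally sized 2-D grayscale frames (lists of
    lists of ints). Generic heuristic sketch only.
    """
    onset = frames[0]
    n_pixels = len(onset) * len(onset[0])
    best_idx, best_diff = 0, 0.0
    for idx, frame in enumerate(frames[1:], start=1):
        total = sum(
            abs(p - q)
            for row_f, row_o in zip(frame, onset)
            for p, q in zip(row_f, row_o)
        )
        diff = total / n_pixels
        if diff > best_diff:
            best_idx, best_diff = idx, diff
    return best_idx

# Toy micro-expression clip: intensity peaks at frame 2, then relaxes.
clip = [
    [[0, 0], [0, 0]],   # onset
    [[1, 1], [0, 0]],
    [[5, 5], [5, 5]],   # apex
    [[1, 0], [0, 0]],   # offset
]
print(detect_apex_frame(clip))  # → 2
```

In practice, published apex detectors (including, presumably, the one in this paper) work on motion or feature descriptors rather than raw pixel differences, which are sensitive to lighting and head movement; this sketch only conveys the idea of locating the peak-intensity frame between onset and offset.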

Full Description

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 184537-184551
Main Authors: Song, Baolin; Li, Ke; Zong, Yuan; Zhu, Jie; Zheng, Wenming; Shi, Jingang; Zhao, Li
Format: Article
Language: English
Keywords: Micro-expression recognition; convolutional neural networks; apex frame location; spatiotemporal information
Online Access: Full text
DOI: 10.1109/ACCESS.2019.2960629
ISSN: 2169-3536
Source: DOAJ Directory of Open Access Journals; IEEE Xplore Open Access Journals; EZB Electronic Journals Library
Subjects:
Algorithms
apex frame location
Apexes
Artificial neural networks
convolutional neural networks
Data mining
Face recognition
Feature extraction
Frame design
Frames (data processing)
Micro-expression recognition
Neural networks
Recognition
spatiotemporal information
Spatiotemporal phenomena
Task analysis
Video
Videos