HMM-based generation of laughter facial expression
Saved in:
Published in: | Speech communication 2018-04, Vol.98, p.28-41 |
Main authors: | Çakmak, Hüseyin; Dutoit, Thierry |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 41 |
container_issue | |
container_start_page | 28 |
container_title | Speech communication |
container_volume | 98 |
creator | Çakmak, Hüseyin; Dutoit, Thierry |
description | This paper proposes a model for visual laughter generation by means of speaker-dependent training of Hidden Markov Models (HMMs). The model is composed of the following parts: 1) facial deformations and 2) head motions are modeled with separate HMMs, while 3) eye-blinks are added as a post-processing step on the generated eyelid trajectories.
The models are trained on a database of facial expressions recorded from one male subject watching humorous videos. A commercially available marker-based motion capture system was used to record the visual data. A preliminary study showed that modeling head motion with the same transcriptions as used for facial deformation is not the best choice, due to the rigidity of the resulting head motion.
Finally, the generated facial laughter trajectories are used to animate a 3D face model and the corresponding animation is rendered as a video. An online MOS perception test is conducted to assess the improvement over the previous method and to compare against the perception of ground-truth trajectories. Results show that the new approach significantly outperforms the previous one. |
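The description above outlines the pipeline: separate speaker-dependent HMMs for facial deformation and head motion, and eye-blinks inserted afterwards on the generated eyelid trajectory. The sketch below illustrates that separation only; it is not the authors' implementation. It assumes hmmlearn for HMM training, plain sampling in place of the paper's transcription-driven trajectory generation, and invented array names and shapes (`facial_feats`, `head_feats`, `episode_lengths`, an eyelid channel at column 0).

```python
# Illustrative sketch (assumptions, not the paper's toolkit or data):
# separate HMMs for facial deformation and head motion, plus eye-blinks
# added as a post-processing step on a generated eyelid trajectory.
import numpy as np
from hmmlearn import hmm

def train_motion_hmm(features, lengths, n_states=5):
    """Fit a Gaussian-emission HMM to concatenated motion trajectories."""
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(features, lengths)          # features: (sum(lengths), n_dims)
    return model

def add_blinks(eyelid_traj, blink_frames, blink_len=8, depth=1.0):
    """Superimpose short close/reopen pulses at the given frame indices."""
    out = eyelid_traj.copy()
    # Half-cosine pulse: eyelid closes then reopens over `blink_len` frames.
    pulse = depth * 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(blink_len) / blink_len))
    for start in blink_frames:
        end = min(start + blink_len, len(out))
        out[start:end] = np.maximum(out[start:end], pulse[: end - start])
    return out

# Placeholder training data: per-frame facial features and head pose, plus
# the length (in frames) of each recorded laughter episode.
facial_feats = np.random.randn(300, 10)   # stand-in for marker-based facial features
head_feats = np.random.randn(300, 6)      # stand-in for head rotation/translation
episode_lengths = [100, 120, 80]

# Facial deformation and head motion are modeled with separate HMMs.
facial_hmm = train_motion_hmm(facial_feats, episode_lengths)
head_hmm = train_motion_hmm(head_feats, episode_lengths)

# Generate a new facial trajectory and insert eye-blinks in post-processing,
# assuming (purely for illustration) that column 0 is the eyelid channel.
generated_facial, _ = facial_hmm.sample(200)
generated_facial[:, 0] = add_blinks(generated_facial[:, 0], blink_frames=[40, 150])
```

In the paper itself, generation is driven by laughter transcriptions and HMM-based synthesis rather than random sampling; the sketch only mirrors the split into facial and head models and the post-hoc blink insertion.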
doi_str_mv | 10.1016/j.specom.2017.12.006 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0167-6393 |
ispartof | Speech communication, 2018-04, Vol.98, p.28-41 |
issn | 0167-6393; 1872-7182 |
language | eng |
recordid | cdi_proquest_journals_2062628678 |
source | Elsevier ScienceDirect Journals Complete |
subjects | Animation; Deformation; Emotions; Face (Body); Facial expression; Facial expressions; Generation; Ground truth; Head movement; Humor; Laughter; Markov analysis; Markov chains; Motion; Motion capture; Motion perception; Post-production processing; Three dimensional models; Trajectories; Truth; Visual; Visual perception |
title | HMM-based generation of laughter facial expression |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T10%3A42%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=HMM-based%20generation%20of%20laughter%20facial%20expression&rft.jtitle=Speech%20communication&rft.au=%C3%87akmak,%20H%C3%BCseyin&rft.date=2018-04&rft.volume=98&rft.spage=28&rft.epage=41&rft.pages=28-41&rft.issn=0167-6393&rft.eissn=1872-7182&rft_id=info:doi/10.1016/j.specom.2017.12.006&rft_dat=%3Cproquest_cross%3E2062628678%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2062628678&rft_id=info:pmid/&rft_els_id=S0167639317300110&rfr_iscdi=true |