Can we truly transfer an actor's genuine happiness to avatars? An investigation into virtual, real, posed and spontaneous faces


Bibliographic details
Published in: arXiv.org, 2023-12
Main authors: Peres, Vitor Miguel Xavier; Molin, Greice Pinho Dal; Musse, Soraia Raupp
Format: Article
Language: English
Online access: Full text
Abstract: "A look is worth a thousand words" is a popular phrase. And why is a simple look enough to portray our feelings about something or someone? Behind this question are the theoretical foundations of the field of psychology regarding social cognition and the studies of psychologist Paul Ekman. Facial expressions, as a form of non-verbal communication, are the primary way to transmit emotions between human beings. The set of movements and expressions of facial muscles that convey some emotional state of the individual to their observers are targets of studies in many areas. Our research aims to evaluate Ekman's action units in datasets of real human faces, posed and spontaneous, and virtual human faces resulting from transferring real faces into Computer Graphics faces. In addition, we also conducted a case study with specific movie characters, such as SheHulk and Genius. We intend to find differences and similarities in facial expressions between real and CG datasets, posed and spontaneous faces, and also to consider the actors' genders in the videos. This investigation can help several areas of knowledge, whether using real or virtual human beings, in education, health, entertainment, games, security, and even legal matters. Our results indicate that AU intensities are greater for posed than spontaneous datasets, regardless of gender. Furthermore, there is a smoothing of intensity up to 80 percent for AU6 and 45 percent for AU12 when a real face is transformed into CG.
DOI: 10.48550/arxiv.2312.02128
Publisher: Cornell University Library, arXiv.org (Ithaca)
Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
Published version: https://doi.org/10.1145/3631085.3631231
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
Subjects: Avatars; Cognition; Computer graphics; Computer Science - Computer Vision and Pattern Recognition; Datasets; Emotional factors; Human beings; Verbal communication; Virtual humans