Expressive Talking Avatars

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geometric markers and features modeled for human faces, not stylized avatar faces. To cope with the challenge of generating emotional and expressive talking avatars, we build the Emotional Talking Avatar Dataset, a talking-face video corpus featuring six different stylized characters talking with seven different emotions. Together with the dataset, we also release an emotional talking avatar generation method that enables the manipulation of emotion. We validated the effectiveness of our dataset and our method in generating audio-based puppetry examples, including comparisons to state-of-the-art techniques and a user study. Finally, various applications of this method are discussed in the context of animating avatars in VR.

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics, 2024-05, Vol. PP (5), p. 1-11
Main Authors: Pan, Ye; Tan, Shuai; Cheng, Shengran; Lin, Qunfen; Zeng, Zijiao; Mitchell, Kenny
Format: Article
Language: English
Subjects:
Online Access: Order full text
container_end_page 11
container_issue 5
container_start_page 1
container_title IEEE transactions on visualization and computer graphics
container_volume PP
creator Pan, Ye
Tan, Shuai
Cheng, Shengran
Lin, Qunfen
Zeng, Zijiao
Mitchell, Kenny
description Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geometric markers and features modeled for human faces, not stylized avatar faces. To cope with the challenge of generating emotional and expressive talking avatars, we build the Emotional Talking Avatar Dataset, a talking-face video corpus featuring six different stylized characters talking with seven different emotions. Together with the dataset, we also release an emotional talking avatar generation method that enables the manipulation of emotion. We validated the effectiveness of our dataset and our method in generating audio-based puppetry examples, including comparisons to state-of-the-art techniques and a user study. Finally, various applications of this method are discussed in the context of animating avatars in VR.
doi_str_mv 10.1109/TVCG.2024.3372047
format Article
publisher IEEE (United States)
pmid 38437076
coden ITVGEA
rights Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
orcid 0000-0003-3322-5161; 0000-0002-1447-6806; 0000-0003-2420-7447
fulltext fulltext_linktorsrc
identifier ISSN: 1077-2626
ispartof IEEE transactions on visualization and computer graphics, 2024-05, Vol.PP (5), p.1-11
issn 1077-2626
1941-0506
language eng
recordid cdi_pubmed_primary_38437076
source IEEE Electronic Library (IEL)
subjects Avatars
Datasets
Electronic mail
Emotions
Faces
Feature extraction
Human-centered computing—Computer graphics—Graphics systems and interfaces—Virtual reality
Human-centered computing—Human computer interaction (HCI)—HCI design and evaluation methods—User studies
Lips
Synchronization
Talking
Three-dimensional displays
Virtual reality
title Expressive Talking Avatars