Empirical modeling of human face kinematics during speech using motion clustering
In this paper we present an algorithm for building an empirical model of facial biomechanics from a set of displacement records of markers located on the face of a subject producing speech. Markers are grouped into clusters, which have a unique primary marker and a number of secondary markers with an associated weight. Motion of the secondary markers is computed as the weighted sum of the primary markers of the clusters to which they belong. This model may be used to produce facial animations, by driving the primary markers with measured kinematic signals.
Saved in:
Published in: | The Journal of the Acoustical Society of America 2005-07, Vol.118 (1), p.405-409 |
---|---|
Main authors: | Lucero, Jorge C.; Maciel, Susanne T. R.; Johns, Derek A.; Munhall, Kevin G. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 409 |
---|---|
container_issue | 1 |
container_start_page | 405 |
container_title | The Journal of the Acoustical Society of America |
container_volume | 118 |
creator | Lucero, Jorge C.; Maciel, Susanne T. R.; Johns, Derek A.; Munhall, Kevin G. |
description | In this paper we present an algorithm for building an empirical model of facial biomechanics from a set of displacement records of markers located on the face of a subject producing speech. Markers are grouped into clusters, which have a unique primary marker and a number of secondary markers with an associated weight. Motion of the secondary markers is computed as the weighted sum of the primary markers of the clusters to which they belong. This model may be used to produce facial animations, by driving the primary markers with measured kinematic signals. |
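The cluster model summarized in the abstract (secondary-marker motion as a weighted sum of primary-marker motion) can be sketched as follows. This is a minimal illustration only: the marker names, cluster assignments, weights, and displacement values are hypothetical, not taken from the paper's data or implementation.

```python
# Minimal sketch of the weighted-sum cluster model from the abstract.
# All marker names, weights, and displacements below are illustrative.

# Displacement record per primary marker: one (x, y, z) tuple per time frame.
primary = {
    "jaw":       [(0.0, -1.0, 0.0), (0.0, -2.0, 0.0)],
    "upper_lip": [(0.0,  0.5, 0.0), (0.0,  1.0, 0.0)],
}

# Each secondary marker belongs to one or more clusters, each contributing
# its primary marker's motion with an associated weight.
clusters = {
    "chin":         [("jaw", 0.8)],
    "mouth_corner": [("jaw", 0.4), ("upper_lip", 0.3)],
}

def secondary_motion(name):
    """Compute a secondary marker's displacement as the weighted sum of
    the primary markers of the clusters it belongs to."""
    terms = clusters[name]
    n_frames = len(primary[terms[0][0]])
    motion = []
    for t in range(n_frames):
        frame = tuple(
            sum(w * primary[p][t][axis] for p, w in terms)
            for axis in range(3)
        )
        motion.append(frame)
    return motion

print(secondary_motion("chin"))
print(secondary_motion("mouth_corner"))
```

Driving the primary markers with measured kinematic signals (as the abstract suggests for animation) would simply mean replacing the `primary` dictionary's contents with recorded trajectories frame by frame.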
doi_str_mv | 10.1121/1.1928807 |
format | Article |
fullrecord | PMID: 16119361; CODEN: JASMAN; Publisher: Acoustical Society of America (Woodbury, NY); Rights: 2005 Acoustical Society of America; 2005 INIST-CNRS; Sources: MEDLINE, American Institute of Physics (AIP) Journals, AIP Acoustical Society of America; Full text: https://pubs.aip.org/jasa/article-lookup/doi/10.1121/1.1928807 |
fulltext | fulltext |
identifier | ISSN: 0001-4966 |
ispartof | The Journal of the Acoustical Society of America, 2005-07, Vol.118 (1), p.405-409 |
issn | 0001-4966 1520-8524 |
language | eng |
recordid | cdi_proquest_miscellaneous_85644736 |
source | MEDLINE; American Institute of Physics (AIP) Journals; AIP Acoustical Society of America |
subjects | Acoustics; Algorithms; Biological and medical sciences; Biomechanical Phenomena; Cluster Analysis; Ear and associated structures. Auditory pathways and centers. Hearing. Vocal organ. Phonation. Sound production. Echolocation; Exact sciences and technology; Facial Muscles - physiology; Fundamental and applied biological sciences. Psychology; Fundamental areas of phenomenology (including applications); Humans; Models, Biological; Motion; Movement; Physics; Speech - physiology; Vertebrates: nervous system and sense organs |
title | Empirical modeling of human face kinematics during speech using motion clustering |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T08%3A15%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Empirical%20modeling%20of%20human%20face%20kinematics%20during%20speech%20using%20motion%20clustering&rft.jtitle=The%20Journal%20of%20the%20Acoustical%20Society%20of%20America&rft.au=Lucero,%20Jorge%20C.&rft.date=2005-07-01&rft.volume=118&rft.issue=1&rft.spage=405&rft.epage=409&rft.pages=405-409&rft.issn=0001-4966&rft.eissn=1520-8524&rft.coden=JASMAN&rft_id=info:doi/10.1121/1.1928807&rft_dat=%3Cproquest_cross%3E85644736%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=68512795&rft_id=info:pmid/16119361&rfr_iscdi=true |