SonicFace: Tracking Facial Expressions Using a Commodity Microphone Array
Accurate recognition of facial expressions and emotional gestures shows promise for understanding an audience's feedback on, and engagement with, entertainment content. Existing methods are primarily based on cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose a novel ubiquitous sensing system based on a commodity microphone array, SonicFace, which provides an accessible, unobtrusive, contact-free, and privacy-preserving solution for continuously monitoring the user's emotional expressions without playing audible sound. SonicFace uses a pairing of a speaker and a microphone array to recognize various fine-grained facial expressions and emotional hand gestures from emitted ultrasound and the received echoes. In a set of experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reaches around 80%. In addition, extensive system evaluations under distinct configurations and an extended real-life case study demonstrate the robustness and generalizability of the proposed SonicFace system.
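This record does not include the paper's signal-processing details, but the sensing principle stated in the abstract (emit inaudible ultrasound, analyze the echoes) can be illustrated with a short sketch. Everything below is an illustrative assumption rather than SonicFace's actual design: the 18–22 kHz linear FMCW chirp, the 48 kHz sample rate, and the matched-filter step are stand-ins for whatever waveform and processing the paper actually uses.

```python
# A minimal sketch of ultrasonic echo profiling, as implied by the abstract.
# The chirp band (18-22 kHz), sample rate (48 kHz), and matched filtering
# are assumptions for illustration, not the paper's documented pipeline.
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000          # assumed sample rate (Hz), common on commodity audio hardware
CHIRP_LEN = 0.01     # assumed 10 ms probe duration

def make_probe():
    """Generate one inaudible linear FMCW chirp (assumed 18-22 kHz band)."""
    t = np.arange(int(FS * CHIRP_LEN)) / FS
    return chirp(t, f0=18_000, f1=22_000, t1=CHIRP_LEN, method="linear")

def echo_profile(recording, probe):
    """Matched-filter the microphone recording against the emitted chirp.

    Peaks in the output correspond to reflections at different distances;
    frame-to-frame changes in the profile would track facial or hand motion.
    """
    corr = correlate(recording, probe, mode="valid")
    return np.abs(corr) / np.max(np.abs(corr))

if __name__ == "__main__":
    probe = make_probe()
    # Simulate a recording: a direct path plus a delayed, attenuated echo.
    rec = np.zeros(FS // 10)
    rec[100:100 + probe.size] += probe            # direct speaker-to-mic path
    rec[400:400 + probe.size] += 0.3 * probe      # echo reflected by the face
    profile = echo_profile(rec, probe)
    print("strongest reflections at lags:", np.argsort(profile)[-2:])
```

In this toy run the two strongest correlation peaks fall at the injected lags (100 and 400 samples), i.e., the direct path and the simulated facial echo; a real system would watch how such peaks shift and deform over time.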
Saved in:
Published in: | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021-12, Vol. 5 (4), p. 1-33, Article 156 |
---|---|
Main authors: | Gao, Yang; Jin, Yincheng; Choi, Seokmin; Li, Jiyang; Pan, Junjie; Shu, Lin; Zhou, Chi; Jin, Zhanpeng |
Format: | Article |
Language: | English |
Subjects: | Human computer interaction (HCI); Human-centered computing; Ubiquitous and mobile computing; Ubiquitous and mobile computing systems and tools |
Online access: | Full text |
container_end_page | 33 |
---|---|
container_issue | 4 |
container_start_page | 1 |
container_title | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies |
container_volume | 5 |
creator | Gao, Yang; Jin, Yincheng; Choi, Seokmin; Li, Jiyang; Pan, Junjie; Shu, Lin; Zhou, Chi; Jin, Zhanpeng |
description | Accurate recognition of facial expressions and emotional gestures shows promise for understanding an audience's feedback on, and engagement with, entertainment content. Existing methods are primarily based on cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose a novel ubiquitous sensing system based on a commodity microphone array, SonicFace, which provides an accessible, unobtrusive, contact-free, and privacy-preserving solution for continuously monitoring the user's emotional expressions without playing audible sound. SonicFace uses a pairing of a speaker and a microphone array to recognize various fine-grained facial expressions and emotional hand gestures from emitted ultrasound and the received echoes. In a set of experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reaches around 80%. In addition, extensive system evaluations under distinct configurations and an extended real-life case study demonstrate the robustness and generalizability of the proposed SonicFace system. |
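As a companion to the echo-profiling sketch above, here is a minimal stand-in for the recognition stage the description implies: classifying windows of echo profiles into the 10 classes (6 facial expressions plus 4 emotional gestures). The SVM model, the feature shape, and the synthetic data are all assumptions; the record does not specify the paper's actual classifier or features.

```python
# Sketch of the recognition stage implied by the description. The paper's
# model is not given in this record; an RBF-kernel SVM over flattened
# windows of echo profiles stands in as an assumption, trained here on
# synthetic data purely to show the plumbing.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

NUM_CLASSES = 10  # 6 facial expressions + 4 emotional gestures

# Hypothetical dataset: 500 samples, each a window of 20 consecutive
# 64-bin echo profiles, flattened into one feature vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20 * 64))
y = rng.integers(0, NUM_CLASSES, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.10 on random data
```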
doi_str_mv | 10.1145/3494988 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2474-9567 |
ispartof | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021-12, Vol. 5 (4), p. 1-33, Article 156 |
issn | 2474-9567 |
language | eng |
recordid | cdi_crossref_primary_10_1145_3494988 |
source | ACM Digital Library |
subjects | Human computer interaction (HCI); Human-centered computing; Ubiquitous and mobile computing; Ubiquitous and mobile computing systems and tools |
title | SonicFace: Tracking Facial Expressions Using a Commodity Microphone Array |
url | https://doi.org/10.1145/3494988 |