ARBEE: Towards Automated Recognition of Bodily Expression of Emotion in the Wild


Saved in:
Bibliographic details
Published in: International journal of computer vision, 2020-01, Vol. 128 (1), p. 1-25
Main authors: Luo, Yu; Ye, Jianbo; Adams, Reginald B.; Li, Jia; Newman, Michelle G.; Wang, James Z.
Format: Article
Language: English
Subjects:
Online access: Full text
Description:
Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given our incomplete understanding of the relationship between emotional expressions and body movements.

The current research, a multidisciplinary effort spanning computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data so that computers can learn to recognize the body language of humans. To accomplish this task, a large and growing annotated dataset with 9876 video clips of body movements and 13,239 human characters, named the Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights.

A system to model emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate the computability of bodily expression. We also report and compare the results of several baseline methods developed for action recognition, based on two different modalities: body skeleton and raw image.

The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding, enabling future robots to interact and collaborate more effectively with humans.
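The abstract credits Laban Movement Analysis (LMA) features with characterizing arousal from skeleton data. The paper's actual feature set is not reproduced here; as an illustration only, a minimal sketch of LMA-inspired kinematic descriptors (Effort-like velocity and acceleration statistics over a joint trajectory) might look like the following. The function name `lma_kinematic_features` and the (T, J, 2) trajectory layout are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def lma_kinematic_features(joints, fps=30.0):
    """Simple LMA-inspired Effort descriptors from a skeleton trajectory.

    joints: array of shape (T, J, 2) -- T frames, J joints, (x, y) coords.
    Returns per-clip summary statistics (illustrative only).
    """
    dt = 1.0 / fps
    vel = np.diff(joints, axis=0) / dt   # (T-1, J, 2) finite-difference velocity
    acc = np.diff(vel, axis=0) / dt      # (T-2, J, 2) finite-difference acceleration
    speed = np.linalg.norm(vel, axis=-1)  # per-joint speed magnitude
    accel = np.linalg.norm(acc, axis=-1)  # per-joint acceleration magnitude
    return {
        "mean_speed": float(speed.mean()),
        "max_speed": float(speed.max()),
        "mean_accel": float(accel.mean()),
        "speed_std": float(speed.std()),
    }

# Toy trajectory: a single joint moving at constant velocity along x,
# 0.1 units per frame at 30 fps -> speed of 3.0 units/s, near-zero acceleration.
traj = np.zeros((10, 1, 2))
traj[:, 0, 0] = np.arange(10) * 0.1
feats = lma_kinematic_features(traj, fps=30.0)
```

In a real pipeline such per-clip statistics would be computed over pose-estimated joints and fed to a regressor for arousal; this sketch only shows the kinematic bookkeeping.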
DOI: 10.1007/s11263-019-01215-y
Publisher: New York: Springer US
PMID: 33664553
ORCID: 0000-0001-7410-4417
Rights: Springer Science+Business Media, LLC, part of Springer Nature 2019. All Rights Reserved.
ISSN: 0920-5691
EISSN: 1573-1405
Source: SpringerLink Journals - AutoHoldings
Subjects:
Arousal
Artificial Intelligence
Automation
Computer Imaging
Computer Science
Computer simulation
Crowdsourcing
Datasets
Emotions
Human communication
Image Processing and Computer Vision
Internet videos
Moving object recognition
Object recognition
Pattern Recognition
Pattern Recognition and Graphics
Psychology
Robotics
Robots
Statistical analysis
Vision