The painful face – Pain expression recognition using active appearance models
Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems, behavioral scientists have identified reliable and valid facial indicators of pain.
Saved in:
Published in: | Image and vision computing 2009-11, Vol.27 (12), p.1788-1796 |
---|---|
Main authors: | Ashraf, Ahmed Bilal; Lucey, Simon; Cohn, Jeffrey F.; Chen, Tsuhan; Ambadar, Zara; Prkachin, Kenneth M.; Solomon, Patricia E. |
Format: | Article |
Language: | eng |
Subjects: | Active appearance models; Automatic facial image analysis; Facial expression; FACS; Pain; Support vector machines |
Online access: | Full text |
container_end_page | 1796 |
---|---|
container_issue | 12 |
container_start_page | 1788 |
container_title | Image and vision computing |
container_volume | 27 |
creator | Ashraf, Ahmed Bilal; Lucey, Simon; Cohn, Jeffrey F.; Chen, Tsuhan; Ambadar, Zara; Prkachin, Kenneth M.; Solomon, Patricia E. |
description | Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face? |
doi_str_mv | 10.1016/j.imavis.2009.05.007 |
format | Article |
publisher | Elsevier B.V. (England) |
pmid | 22837587 |
abbreviated_journal_title | Image Vis Comput |
rights | 2009 Elsevier B.V. All rights reserved. |
peer_reviewed | true |
open_access | free_for_read |
fulltext | fulltext |
identifier | ISSN: 0262-8856; EISSN: 1872-8138 |
ispartof | Image and vision computing, 2009-11, Vol.27 (12), p.1788-1796 |
issn | 0262-8856 1872-8138 |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_3402903 |
source | Elsevier ScienceDirect Journals Complete |
subjects | Active appearance models; Automatic facial image analysis; Facial expression; FACS; Pain; Support vector machines |
title | The painful face – Pain expression recognition using active appearance models |
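The abstract contrasts two labeling granularities for training an SVM on AAM-derived face features: frame-level ground truth (each frame labeled from facial actions) versus sequence-level ground truth (one observer rating per video, inherited by every frame). The toy sketch below is not the authors' code: it uses random 2-D features as a stand-in for AAM shape/appearance parameters and a minimal perceptron as a stand-in for the linear SVM, purely to illustrate how sequence-level labels propagate to frames and how per-frame scores are then aggregated back to a sequence decision.

```python
# Hypothetical illustration of sequence-level labeling (all names and
# parameters are invented for this sketch, not from the paper).
import random

random.seed(0)

def make_sequence(pain, n_frames=20):
    """Toy per-frame feature vectors for one video sequence.
    'Pain' frames are shifted along the first feature dimension."""
    shift = 1.5 if pain else -1.5
    return [(random.gauss(shift, 1.0), random.gauss(0.0, 1.0))
            for _ in range(n_frames)]

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Minimal linear classifier standing in for the SVM."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in zip(X, y):
            score = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 if score > 0 else -1
            if pred != label:  # perceptron update on mistakes
                w = [w[0] + lr * label * x[0], w[1] + lr * label * x[1]]
                b += lr * label
    return w, b

def frame_score(w, b, x):
    return w[0] * x[0] + w[1] * x[1] + b

# Toy dataset: 20 sequences, each with a single sequence-level label.
sequences = [(make_sequence(pain), 1 if pain else -1)
             for pain in [True, False] * 10]

# Sequence-level labeling: every frame inherits its sequence's label.
X = [x for frames, _label in sequences for x in frames]
y = [label for _frames, label in sequences for _ in _frames] if False else \
    [label for frames, label in sequences for _ in frames]
w, b = train_perceptron(X, y)

def classify_sequence(frames):
    """Aggregate per-frame scores (mean) into one sequence decision."""
    mean = sum(frame_score(w, b, x) for x in frames) / len(frames)
    return 1 if mean > 0 else -1

correct = sum(classify_sequence(frames) == label
              for frames, label in sequences)
print(f"sequence-level accuracy: {correct}/{len(sequences)}")
```

With frame-level ground truth, the `y` list would instead come from per-frame facial-action coding rather than being copied from the sequence rating; the paper's first research question asks which granularity suffices for acceptable detection performance.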