AM-FED+: An Extended Dataset of Naturalistic Facial Expressions Collected in Everyday Settings

Public datasets have played a significant role in advancing the state-of-the-art in automated facial coding. Many of these datasets contain posed expressions and/or videos recorded in controlled lab conditions with little variation in lighting or head pose. As such, the data do not reflect the conditions observed in many real-world applications. We present AM-FED+, an extended dataset of naturalistic facial response videos collected in everyday settings. The dataset contains 1,044 videos, of which 545 videos (263,705 frames or 21,859 seconds) have been comprehensively manually coded for facial action units. These videos act as a challenging benchmark for automated facial coding systems. All the videos contain gender labels and a large subset (77 percent) contain age and country information. Subject self-reported liking and familiarity with the stimuli are also included. We provide automated facial landmark detection locations for the videos. Finally, baseline action unit classification results are presented for the coded videos. The dataset is available to download online: https://www.affectiva.com/facial-expression-dataset/
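The coding statistics in the abstract imply an average frame rate and clip length; a quick sanity check (assuming the frame and second counts both refer to the same 545 coded videos):

```python
# Coded-subset statistics quoted in the abstract.
coded_videos = 545
frames = 263_705
seconds = 21_859

# Implied average frame rate across the coded videos.
fps = frames / seconds
print(f"{fps:.2f} fps")  # roughly 12 fps, plausible for webcam capture

# Implied average coded-video length.
print(f"{seconds / coded_videos:.1f} s per video")
```

This works out to about 12 fps and roughly 40 seconds per coded video, consistent with short webcam-recorded responses to stimuli.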

Full description

Bibliographic details
Published in: IEEE Transactions on Affective Computing, 2019-01, Vol. 10 (1), p. 7-17
Main authors: McDuff, Daniel; Amr, May; Kaliouby, Rana el
Format: Article
Language: English
DOI: 10.1109/TAFFC.2018.2801311
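Any DOI, including the one above, can be turned into a resolvable link via the doi.org proxy; a minimal sketch (percent-encoding the suffix, which this particular DOI does not actually need):

```python
from urllib.parse import quote

doi = "10.1109/TAFFC.2018.2801311"
# Percent-encode reserved characters in the DOI suffix; '/' and '.' stay literal.
url = "https://doi.org/" + quote(doi, safe="/.")
print(url)  # https://doi.org/10.1109/TAFFC.2018.2801311
```

Following that URL redirects to the publisher's landing page for the article (here, IEEE Xplore).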
ISSN: 1949-3045
Source: IEEE Electronic Library (IEL)
Subjects: Automation; Coding; corpora; dataset; Datasets; Downloading; Encoding; Face recognition; facial action coding system; Facial expressions; Lighting; Task analysis; Training; Videos