Video Affective Content Analysis: A Survey of State-of-the-Art Methods
Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.
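To make the distinction concrete, the sketch below illustrates the direct approach in miniature: hand-crafted audiovisual features (MFCCs from the soundtrack and HSV color statistics from sampled frames, both of which appear among the subject terms of this record) are pooled per clip and fed to a generic classifier. It assumes librosa, OpenCV, and scikit-learn are available; the file names, labels, and feature choices are illustrative placeholders, not the method of the surveyed paper or of any specific work it covers.

```python
# Minimal sketch of a "direct" pipeline: clip-level audiovisual features -> emotion classifier.
# Hypothetical inputs: (video, extracted .wav soundtrack) pairs with integer emotion labels.
import cv2                      # visual features (pip install opencv-python)
import librosa                  # audio features
import numpy as np
from sklearn.svm import SVC

def audio_features(wav_path, n_mfcc=13):
    """Clip-level audio descriptor: mean and std of MFCCs over time."""
    y, sr = librosa.load(wav_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)        # shape (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # shape (2 * n_mfcc,)

def visual_features(video_path, max_frames=200):
    """Clip-level visual descriptor: mean/std HSV color statistics over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    stats = []
    while len(stats) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        stats.append(np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))]))
    cap.release()
    return np.mean(stats, axis=0)                                  # shape (6,)

def clip_descriptor(video_path, wav_path):
    """Concatenate audio and visual descriptors into one feature vector per clip."""
    return np.concatenate([audio_features(wav_path), visual_features(video_path)])

# Hypothetical training data: clip paths and categorical emotion labels.
train_clips = [("happy_clip.mp4", "happy_clip.wav"), ("sad_clip.mp4", "sad_clip.wav")]
train_labels = [0, 1]                                              # e.g. 0 = positive, 1 = negative

X = np.stack([clip_descriptor(v, a) for v, a in train_clips])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict(X))                                              # sanity check on the training clips
```

The surveyed methods use far richer features and models; the point here is only the data flow of the direct approach, from audiovisual features to emotion labels, in contrast with implicit approaches that instead model the viewer's spontaneous responses.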
Saved in:
Published in: | IEEE Transactions on Affective Computing, 2015-10, Vol. 6 (4), p. 410-430 |
---|---|
Main authors: | Wang, Shangfei; Ji, Qiang |
Format: | Article |
Language: | eng |
Subjects: | Content analysis; content-based video retrieval; emotion recognition; Feature extraction; Image color analysis; Mel frequency cepstral coefficient; Speech processing; Video affective content analysis; Video retrieval |
Online access: | Order full text |
container_end_page | 430 |
---|---|
container_issue | 4 |
container_start_page | 410 |
container_title | IEEE transactions on affective computing |
container_volume | 6 |
creator | Wang, Shangfei; Ji, Qiang |
description | Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research. |
doi_str_mv | 10.1109/TAFFC.2015.2432791 |
format | Article |
identifier | ISSN: 1949-3045 |
ispartof | IEEE transactions on affective computing, 2015-10, Vol.6 (4), p.410-430 |
issn | 1949-3045 |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | Content analysis; content-based video retrieval; emotion recognition; Feature extraction; Image color analysis; Mel frequency cepstral coefficient; Speech processing; Video affective content analysis; Video retrieval |
title | Video Affective Content Analysis: A Survey of State-of-the-Art Methods |