GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition
Recently, GPT-4 with Vision (GPT-4V) has demonstrated remarkable visual capabilities across various tasks, but its performance in emotion recognition has not been fully evaluated. To bridge this gap, we present the quantitative evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks:...
Saved in:
Main Authors: | Lian, Zheng; Sun, Licai; Sun, Haiyang; Chen, Kang; Wen, Zhuofan; Gu, Hao; Liu, Bin; Tao, Jianhua |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia |
Online Access: | Order full text |
creator | Lian, Zheng; Sun, Licai; Sun, Haiyang; Chen, Kang; Wen, Zhuofan; Gu, Hao; Liu, Bin; Tao, Jianhua |
description | Recently, GPT-4 with Vision (GPT-4V) has demonstrated remarkable visual
capabilities across various tasks, but its performance in emotion recognition
has not been fully evaluated. To bridge this gap, we present the quantitative
evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks: visual
sentiment analysis, tweet sentiment analysis, micro-expression recognition,
facial emotion recognition, dynamic facial emotion recognition, and multimodal
emotion recognition. This paper collectively refers to these tasks as
"Generalized Emotion Recognition (GER)". Through experimental analysis, we
observe that GPT-4V exhibits strong visual understanding capabilities in GER
tasks. Meanwhile, GPT-4V shows the ability to integrate multimodal clues and
exploit temporal information, which is also critical for emotion recognition.
However, it's worth noting that GPT-4V is primarily designed for general
domains and cannot recognize micro-expressions that require specialized
knowledge. To the best of our knowledge, this paper provides the first
quantitative assessment of GPT-4V for GER tasks. We have open-sourced the code
and encourage subsequent researchers to broaden the evaluation scope by
including more tasks and datasets. Our code and evaluation results are
available at: https://github.com/zeroQiaoba/gpt4v-emotion. |
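
For readers who want a feel for the zero-shot setup the abstract describes, the sketch below shows one way an image-based emotion query to GPT-4V could be issued through the OpenAI Python API. This is only an illustrative assumption: the model name, prompt wording, and label set are placeholders chosen here, not the authors' exact configuration; their actual prompts and evaluation code are in the repository linked above.

```python
# Hypothetical sketch of a zero-shot emotion query to GPT-4V via the OpenAI API.
# Model name, prompt, and label set are assumptions for illustration only;
# the paper's real prompts and parsing logic live in the linked repository.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["happy", "sad", "angry", "fearful", "disgusted", "surprised", "neutral"]

def classify_emotion(image_path: str) -> str:
    # Encode the image as base64 so it can be passed inline as a data URL.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed vision-capable model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Which emotion best describes this face? "
                         f"Answer with exactly one word from: {', '.join(LABELS)}."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=10,
    )
    return response.choices[0].message.content.strip().lower()

# Example usage: print(classify_emotion("face_001.jpg"))
```

A benchmark run would loop such a call over each dataset sample and compare the returned label against the ground truth; constraining the answer to one word keeps the output easy to score.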
doi_str_mv | 10.48550/arxiv.2312.04293 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2312.04293 |
language | eng |
recordid | cdi_arxiv_primary_2312_04293 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition Computer Science - Multimedia |
title | GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T01%3A24%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=GPT-4V%20with%20Emotion:%20A%20Zero-shot%20Benchmark%20for%20Generalized%20Emotion%20Recognition&rft.au=Lian,%20Zheng&rft.date=2023-12-07&rft_id=info:doi/10.48550/arxiv.2312.04293&rft_dat=%3Carxiv_GOX%3E2312_04293%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |