GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition
Saved in:
Main authors: | , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Recently, GPT-4 with Vision (GPT-4V) has demonstrated remarkable visual
capabilities across various tasks, but its performance in emotion recognition
has not been fully evaluated. To bridge this gap, we present the quantitative
evaluation results of GPT-4V on 21 benchmark datasets covering 6 tasks: visual
sentiment analysis, tweet sentiment analysis, micro-expression recognition,
facial emotion recognition, dynamic facial emotion recognition, and multimodal
emotion recognition. This paper collectively refers to these tasks as
"Generalized Emotion Recognition (GER)". Through experimental analysis, we
observe that GPT-4V exhibits strong visual understanding capabilities in GER
tasks. Meanwhile, GPT-4V shows the ability to integrate multimodal clues and
exploit temporal information, which is also critical for emotion recognition.
However, it is worth noting that GPT-4V is primarily designed for general
domains and cannot recognize micro-expressions, which require specialized
knowledge. To the best of our knowledge, this paper provides the first
quantitative assessment of GPT-4V for GER tasks. We have open-sourced the code
and encourage subsequent researchers to broaden the evaluation scope by
including more tasks and datasets. Our code and evaluation results are
available at: https://github.com/zeroQiaoba/gpt4v-emotion. |
DOI: | 10.48550/arxiv.2312.04293 |