Bias in Emotion Recognition with ChatGPT
Saved in:

| | |
|---|---|
| Main author(s) | |
| Format | Article |
| Language | English |
| Subjects | |
Abstract: This technical report explores ChatGPT's ability to recognize emotions from text, a capability that can underpin applications such as interactive chatbots, data annotation, and mental health analysis. While prior research has demonstrated ChatGPT's basic competence in sentiment analysis, its performance on more nuanced emotion recognition has not yet been explored. Here, we conducted experiments to evaluate its emotion recognition performance across different datasets and emotion labels. Our findings indicate a reasonable level of reproducibility in its performance, with noticeable improvement through fine-tuning. However, performance varies across emotion labels and datasets, highlighting an inherent instability and possible bias. The choice of dataset and emotion labels significantly impacts ChatGPT's emotion recognition performance. This paper sheds light on the importance of dataset and label selection, and on the potential of fine-tuning to enhance ChatGPT's emotion recognition capabilities, providing a groundwork for better integration of emotion analysis in applications that use ChatGPT.
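The record does not reproduce the paper's experimental setup, but as a rough illustration of the kind of query the abstract describes, the sketch below shows one way an emotion label could be requested from ChatGPT via the OpenAI chat completions API. The model name, label set, and prompt wording here are assumptions made for illustration, not the authors' protocol.

```python
# Illustrative sketch only: one way to query ChatGPT for a single emotion label.
# Model name, label set, and prompt wording are assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["joy", "sadness", "anger", "fear", "surprise", "neutral"]  # assumed label set


def classify_emotion(text: str) -> str:
    """Ask the model to pick exactly one emotion label for the given text."""
    prompt = (
        "Classify the emotion expressed in the following text. "
        f"Answer with exactly one label from: {', '.join(LABELS)}.\n\n"
        f"Text: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        temperature=0,          # deterministic output, helps when checking reproducibility
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()


print(classify_emotion("I can't believe they cancelled the trip again."))
```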
DOI: 10.48550/arxiv.2310.11753