Musical Prosody-Driven Emotion Classification: Interpreting Vocalists Portrayal of Emotions Through Machine Learning
Saved in:

| | |
|---|---|
| Main authors: | , , , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: The task of classifying emotions within a musical track has received widespread attention within the Music Information Retrieval (MIR) community. Music emotion recognition has traditionally relied on acoustic features, verbal features, and metadata-based filtering. The role of musical prosody remains under-explored, despite several studies demonstrating a strong connection between prosody and emotion. In this study, we restrict the input of traditional machine learning algorithms to the features of musical prosody. Furthermore, our proposed approach builds upon prior work by classifying emotions under an expanded emotional taxonomy based on the Geneva Emotion Wheel. We use a methodology for individual data collection from vocalists, with ground-truth labeling performed by the artists themselves. We found that traditional machine learning algorithms, when limited to the features of musical prosody, (1) achieve high accuracies for a single singer, (2) maintain high accuracy when the dataset is expanded to multiple singers, and (3) achieve high accuracies when trained on a reduced subset of the total features.
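The following is a minimal, hypothetical sketch of the setup the summary describes: a traditional machine learning classifier whose input is restricted to prosodic features. The feature names, the random-forest choice, the synthetic data, and the example emotion labels (drawn loosely from the Geneva Emotion Wheel) are all assumptions for illustration; this record does not specify the authors' actual features, models, or dataset.

```python
# Hypothetical sketch only: a traditional classifier limited to
# musical-prosody features. Names, model, and data are illustrative,
# not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed prosodic feature set: pitch, loudness, and timing statistics.
PROSODY_FEATURES = [
    "f0_mean", "f0_std", "f0_range",         # pitch contour statistics
    "energy_mean", "energy_std",             # loudness dynamics
    "tempo", "pause_ratio", "vibrato_rate",  # timing and expressive timing
]

# Placeholder labels standing in for an expanded emotional taxonomy.
EMOTIONS = ["joy", "sadness", "anger", "tenderness"]

# Stand-in data: in the study, features would be extracted from vocal
# recordings and labels supplied by the vocalists themselves.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, len(PROSODY_FEATURES)))
y = rng.choice(EMOTIONS, size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Finding (3) concerns reduced feature subsets; ranking importances is
# one simple way such a subset could be chosen (illustrative only).
ranked = sorted(zip(clf.feature_importances_, PROSODY_FEATURES), reverse=True)
print("top features:", [name for _, name in ranked[:3]])
```

On real data, the synthetic matrix would be replaced with prosody features extracted from the vocal recordings, and the random labels with the artists' self-reported emotions.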
DOI: 10.48550/arxiv.2106.02556