What makes music musical? A framework for extracting performance expression and emotion in musical sound
Main authors: |  |
---|---|
Format: | Conference paper |
Language: | English |
Subjects: |  |
Online access: | Order full text |
Summary: | We present a framework to provide a quantitative representation of aspects of musical sound that are associated with musical expressiveness and emotions. After a brief introduction to the background of expressive features in music, we introduce a score-to-audio mapping algorithm based on dynamic time warping, which segments the audio by comparing it to a music score. Expressive feature extraction algorithms are then introduced. The algorithms extract an expressive feature set that includes pitch deviation, loudness, timbre, timing, articulation, and modulation from the segmented audio to construct an expressive feature database. We have demonstrated these tools in the context of solo Western classical music, specifically for the solo oboe. We also discuss potential applications to music performance education and music "language" processing. |
DOI: | 10.1109/DSP-SPE.2011.5739229 |
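
The abstract describes segmenting a recording by aligning it to the score with dynamic time warping (DTW). The sketch below is a minimal, self-contained illustration of that idea only, not the authors' implementation: it assumes the score and the audio have already been converted to comparable frame-wise feature sequences (for example pitch or chroma vectors), and the function name `dtw_align` is hypothetical.

```python
import numpy as np

def dtw_align(score_feats, audio_feats):
    """Align two feature sequences with dynamic time warping.

    score_feats : (N, D) array of per-frame features derived from the score
    audio_feats : (M, D) array of per-frame features extracted from the audio
    Returns the accumulated cost matrix and the optimal warping path
    as a list of (score_index, audio_index) pairs.
    """
    N, M = len(score_feats), len(audio_feats)

    # Pairwise Euclidean distance between every score frame and audio frame.
    cost = np.linalg.norm(
        score_feats[:, None, :] - audio_feats[None, :, :], axis=-1
    )

    # Accumulated cost with the standard step set {(1,0), (0,1), (1,1)}.
    acc = np.full((N, M), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(N):
        for j in range(M):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j] if i > 0 else np.inf,
                acc[i, j - 1] if j > 0 else np.inf,
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            acc[i, j] = cost[i, j] + best_prev

    # Backtrack from the end of both sequences to recover the warping path.
    i, j = N - 1, M - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((acc[i, j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1, j - 1], (i - 1, j - 1)))
        _, (i, j) = min(candidates, key=lambda c: c[0])
        path.append((i, j))
    return acc, path[::-1]
```

In a segmentation of this kind, the audio frames mapped to each score position give note boundaries, and per-note expressive features such as pitch deviation, loudness, timing, and articulation could then be measured within those segments. A practical system would replace the quadratic loop with an optimized DTW routine and domain-specific features.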