A sample-level DCNN for music auto-tagging
Published in: Multimedia Tools and Applications, 2021-03, Vol. 80 (8), pp. 11459-11469
Main authors: , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Deep convolutional neural networks (DCNNs) have been widely used in music auto-tagging, a multi-label classification task that predicts tags for audio signals. This paper presents a sample-level DCNN for music auto-tagging. The proposed DCNN highlights two components: strided convolutional layers, which extract local features and reduce the temporal dimension, and residual blocks from WaveNet, which preserve the input resolution and extract more complex features. To further improve performance, a squeeze-and-excitation (SE) block is introduced into the residual block. Under the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric, experimental results on the MagnaTagATune (MTAT) dataset show that the two proposed models achieve scores of 91.47% and 92.76%, respectively. Furthermore, our proposed models slightly surpass the state-of-the-art model SampleCNN with SE block.
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-020-10330-9
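The record itself contains no code, but the architecture described in the abstract (strided convolutions over raw waveform samples, WaveNet-style residual blocks, and SE reweighting) can be illustrated with a minimal sketch. This is an assumed PyTorch rendering, not the authors' implementation: layer counts, channel widths, kernel sizes, and the 59049-sample input length (borrowed from SampleCNN's convention) are illustrative choices, not details from the paper.

```python
# Minimal sketch (not the paper's code) of the two components the abstract
# names: strided 1-D convolutions that extract local features while reducing
# the temporal dimension, and a WaveNet-style residual block augmented with
# squeeze-and-excitation (SE). All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: rescale channels by globally pooled context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, time)
        scale = self.fc(x.mean(dim=2))     # squeeze: global average over time
        return x * scale.unsqueeze(2)      # excite: per-channel reweighting

class SEResidualBlock(nn.Module):
    """WaveNet-style gated residual block with an SE stage (assumed layout)."""
    def __init__(self, channels, dilation=1):
        super().__init__()
        self.filter = nn.Conv1d(channels, channels, 3,
                                padding=dilation, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, 3,
                              padding=dilation, dilation=dilation)
        self.se = SEBlock(channels)

    def forward(self, x):
        # Gated activation as in WaveNet; the skip connection keeps the
        # input resolution unchanged, as the abstract emphasizes.
        h = torch.tanh(self.filter(x)) * torch.sigmoid(self.gate(x))
        return x + self.se(h)

class SampleLevelDCNN(nn.Module):
    def __init__(self, n_tags=50, channels=128):
        super().__init__()
        # Strided convolutions downsample the raw waveform (stride 3 twice
        # shrinks 59049 samples to 6561 timesteps) while learning local features.
        self.frontend = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, stride=3),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=3, stride=3),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(
            *[SEResidualBlock(channels, dilation=2 ** i) for i in range(4)])
        self.head = nn.Linear(channels, n_tags)

    def forward(self, wav):                # wav: (batch, 1, samples)
        h = self.blocks(self.frontend(wav))
        # Sigmoid (not softmax) because auto-tagging is multi-label.
        return torch.sigmoid(self.head(h.mean(dim=2)))

# Usage: model = SampleLevelDCNN(); probs = model(torch.randn(2, 1, 59049))
```

A sigmoid output layer is the natural fit here because, as the abstract notes, auto-tagging is multi-label: each tag is an independent binary prediction, which also matches the per-tag AUC-ROC evaluation.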