A sample-level DCNN for music auto-tagging

Deep convolutional neural networks (DCNNs) have been widely used in music auto-tagging, a multi-label classification task that predicts the tags of an audio signal. This paper presents a sample-level DCNN for music auto-tagging. The proposed DCNN highlights two components: strided convolutional layers, which extract local features and reduce the temporal dimension, and residual blocks adapted from WaveNet, which preserve the input resolution and extract more complex features. To further improve performance, a squeeze-and-excitation (SE) block is introduced into the residual block. Under the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric, experimental results on the MagnaTagATune (MTAT) dataset show that the two proposed models achieve 91.47% and 92.76%, respectively. Furthermore, the proposed models slightly surpass the state-of-the-art model SampleCNN with SE block.
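The two highlighted components translate naturally into code: a strided 1-D convolution front end that consumes raw samples, WaveNet-style residual blocks whose padding keeps the temporal resolution, and an SE block that reweights channels inside each residual block. The PyTorch sketch below is an illustrative reconstruction, not the authors' published configuration; the class names, channel width, kernel sizes, dilation schedule, and the 50-tag output (a common MTAT setup) are all assumptions.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: rescale channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        scale = self.fc(x.mean(dim=2))      # "squeeze" over the time axis
        return x * scale.unsqueeze(-1)      # "excite": per-channel gain

class SEResBlock(nn.Module):
    """WaveNet-style residual block with SE gating; padding preserves the
    input's temporal resolution, as the abstract describes."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.bn = nn.BatchNorm1d(channels)
        self.se = SEBlock(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn(self.conv(x)))
        return x + self.se(out)             # residual connection

class SampleLevelTagger(nn.Module):
    """Strided convs downsample the raw waveform; residual SE blocks add depth."""
    def __init__(self, n_tags: int = 50, channels: int = 128):
        super().__init__()
        self.frontend = nn.Sequential(      # raw samples -> local features
            nn.Conv1d(1, channels, kernel_size=3, stride=3),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, stride=3),
            nn.BatchNorm1d(channels), nn.ReLU(),
        )
        self.blocks = nn.Sequential(*[SEResBlock(channels, dilation=2 ** i)
                                      for i in range(4)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, n_tags), nn.Sigmoid(),  # multi-label output
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, n_samples), e.g. 59049 raw samples as in SampleCNN
        return self.head(self.blocks(self.frontend(wav)))

model = SampleLevelTagger()
tags = model(torch.randn(2, 1, 59049))      # -> (2, 50) tag probabilities
```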

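For the evaluation metric, AUC-ROC on a multi-label task is typically computed per tag and then averaged over tags. A minimal scikit-learn sketch, with random placeholder arrays standing in for MTAT ground truth and model scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: binary tag matrix (n_clips, n_tags); y_score: predicted probabilities
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, 50))   # placeholder ground-truth tags
y_score = rng.random((200, 50))               # placeholder model outputs

# Macro average: AUC per tag, then the mean over tags
auc = roc_auc_score(y_true, y_score, average="macro")
print(f"AUC-ROC: {auc:.4f}")
```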

Saved in:
Bibliographic details
Published in: Multimedia Tools and Applications, 2021-03, Vol. 80 (8), pp. 11459-11469
Main authors: Yu, Yong-bin; Qi, Min-hui; Tang, Yi-fan; Deng, Quan-xin; Mai, Feng; Zhaxi, Nima
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s11042-020-10330-9
Publisher: Springer US, New York
Rights: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
ISSN: 1380-7501
EISSN: 1573-7721
Source: SpringerLink Journals
Subjects:
Artificial neural networks
Audio signals
Classification
Computer Communication Networks
Computer Science
Data Structures and Information Theory
Datasets
Feature extraction
Marking
Model testing
Multimedia
Multimedia Information Systems
Music
Neural networks
Special Purpose and Application-Based Systems
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T14%3A54%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20sample-level%20DCNN%20for%20music%20auto-tagging&rft.jtitle=Multimedia%20tools%20and%20applications&rft.au=Yu,%20Yong-bin&rft.date=2021-03-01&rft.volume=80&rft.issue=8&rft.spage=11459&rft.epage=11469&rft.pages=11459-11469&rft.issn=1380-7501&rft.eissn=1573-7721&rft_id=info:doi/10.1007/s11042-020-10330-9&rft_dat=%3Cproquest_cross%3E2513419510%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2513419510&rft_id=info:pmid/&rfr_iscdi=true