PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition

Audio pattern recognition is an important research topic in machine learning, and includes several tasks such as audio tagging, acoustic scene classification, music classification, speech emotion classification and sound event detection. Recently, neural networks have been applied to tackle audio pattern recognition problems.

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020, Vol. 28, p. 2880-2894
Main authors: Kong, Qiuqiang, Cao, Yin, Iqbal, Turab, Wang, Yuxuan, Wang, Wenwu, Plumbley, Mark D.
Format: Article
Language: English
Subjects:
Online access: Order full text
container_end_page 2894
container_issue
container_start_page 2880
container_title IEEE/ACM transactions on audio, speech, and language processing
container_volume 28
creator Kong, Qiuqiang
Cao, Yin
Iqbal, Turab
Wang, Yuxuan
Wang, Wenwu
Plumbley, Mark D.
description Audio pattern recognition is an important research topic in machine learning, and includes several tasks such as audio tagging, acoustic scene classification, music classification, speech emotion classification and sound event detection. Recently, neural networks have been applied to tackle audio pattern recognition problems. However, previous systems are built on specific datasets with limited durations. In computer vision and natural language processing, systems pretrained on large-scale datasets have generalized well to several tasks; there has been limited research, however, on pretraining systems on large-scale datasets for audio pattern recognition. In this paper, we propose pretrained audio neural networks (PANNs) trained on the large-scale AudioSet dataset. These PANNs are transferred to other audio-related tasks. We investigate the performance and computational complexity of PANNs modeled by a variety of convolutional neural networks. We propose an architecture called Wavegram-Logmel-CNN that uses both the log-mel spectrogram and the waveform as input features. Our best PANN system achieves a state-of-the-art mean average precision (mAP) of 0.439 on AudioSet tagging, outperforming the previous best system's 0.392. We transfer PANNs to six audio pattern recognition tasks and demonstrate state-of-the-art performance in several of them. We have released the source code and pretrained models of PANNs: https://github.com/qiuqiangkong/audioset_tagging_cnn .
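The authors' actual models are in the repository linked above. As a rough illustration of the dual-input idea in the description, the sketch below is a minimal, hypothetical PyTorch model that fuses a waveform branch (1-D convolutions learning a "wavegram"-like representation) with a log-mel spectrogram branch ahead of a multi-label tagging head. The layer sizes, mel front-end parameters, fusion step, and class count handling are illustrative assumptions, not the released Wavegram-Logmel-CNN.

```python
# Hypothetical sketch of a dual-input (waveform + log-mel) audio tagger.
# Not the authors' implementation; all hyperparameters are illustrative.
import torch
import torch.nn as nn
import torchaudio


class DualInputTagger(nn.Module):
    def __init__(self, n_mels: int = 64, n_classes: int = 527):
        # 527 is the number of AudioSet tagging classes.
        super().__init__()
        # Log-mel branch: standard mel spectrogram front end (parameters assumed).
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=32000, n_fft=1024, hop_length=320, n_mels=n_mels)
        self.mel_cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # Waveform branch: 1-D convolutions learn a time-frequency-like
        # representation directly from the samples (the "wavegram" idea).
        self.wave_cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=11, stride=5, padding=5), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        # Fused embedding -> multi-label tag probabilities.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples)
        logmel = torch.log(self.melspec(waveform) + 1e-6)          # (batch, mels, frames)
        mel_feat = self.mel_cnn(logmel.unsqueeze(1)).flatten(1)    # (batch, 32)
        wave_feat = self.wave_cnn(waveform.unsqueeze(1)).flatten(1)  # (batch, 32)
        fused = torch.cat([mel_feat, wave_feat], dim=1)
        return torch.sigmoid(self.head(fused))  # sigmoid: multi-label tagging


if __name__ == "__main__":
    model = DualInputTagger()
    clips = torch.randn(2, 32000 * 10)   # two 10-second clips at 32 kHz
    print(model(clips).shape)            # torch.Size([2, 527])
```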
doi_str_mv 10.1109/TASLP.2020.3030497
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 2329-9290
ispartof IEEE/ACM transactions on audio, speech, and language processing, 2020, Vol.28, p.2880-2894
issn 2329-9290
2329-9304
language eng
recordid cdi_proquest_journals_2457976042
source IEEE Electronic Library (IEL)
subjects Acoustics
Artificial neural networks
Audio tagging
Classification
Computer vision
Convolution
Datasets
Machine learning
Marking
Music
Natural language processing
Neural networks
Pattern recognition
pretrained audio neural networks
Source code
Tagging
Task analysis
Task complexity
Training
transfer learning
Waveforms
title PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T13%3A43%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=PANNs:%20Large-Scale%20Pretrained%20Audio%20Neural%20Networks%20for%20Audio%20Pattern%20Recognition&rft.jtitle=IEEE/ACM%20transactions%20on%20audio,%20speech,%20and%20language%20processing&rft.au=Kong,%20Qiuqiang&rft.date=2020&rft.volume=28&rft.spage=2880&rft.epage=2894&rft.pages=2880-2894&rft.issn=2329-9290&rft.eissn=2329-9304&rft.coden=ITASFA&rft_id=info:doi/10.1109/TASLP.2020.3030497&rft_dat=%3Cproquest_RIE%3E2457976042%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2457976042&rft_id=info:pmid/&rft_ieee_id=9229505&rfr_iscdi=true