Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this paper provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side-by-side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified.
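The abstract singles out log-mel spectra as one of the dominant feature representations fed to these networks. As a minimal illustrative sketch (not code from the paper), the snippet below shows one common way such features are computed, assuming the librosa library; the file name `audio.wav` and the frame parameters are hypothetical choices.

```python
# Minimal sketch of a log-mel spectrogram front end, the kind of feature
# representation the survey identifies as dominant in audio deep learning.
# Assumes the librosa library; "audio.wav" is a hypothetical example file.
import librosa
import numpy as np

# Load audio at a fixed sampling rate (16 kHz is a common choice for speech).
y, sr = librosa.load("audio.wav", sr=16000)

# Power mel spectrogram: squared STFT magnitudes pooled into mel bands.
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=400, hop_length=160, n_mels=64
)

# Log compression (dB scale) gives the log-mel features fed to CNNs/LSTMs.
log_mel = librosa.power_to_db(mel, ref=np.max)

print(log_mel.shape)  # (n_mels, n_frames)
```

A 25 ms window with a 10 ms hop at 16 kHz (here `n_fft=400`, `hop_length=160`) is a typical speech-processing configuration; music and environmental-sound tasks often use longer windows or more mel bands.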
Saved in:
Published in: | IEEE journal of selected topics in signal processing 2019-05, Vol.13 (2), p.206-219 |
---|---|
Main authors: | Purwins, Hendrik; Li, Bo; Virtanen, Tuomas; Schluter, Jan; Chang, Shuo-Yiin; Sainath, Tara |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 219 |
---|---|
container_issue | 2 |
container_start_page | 206 |
container_title | IEEE journal of selected topics in signal processing |
container_volume | 13 |
creator | Purwins, Hendrik; Li, Bo; Virtanen, Tuomas; Schluter, Jan; Chang, Shuo-Yiin; Sainath, Tara |
description | Given the recent surge in developments of deep learning, this paper provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side-by-side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified. |
doi_str_mv | 10.1109/JSTSP.2019.2908700 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1932-4553 |
ispartof | IEEE journal of selected topics in signal processing, 2019-05, Vol.13 (2), p.206-219 |
issn | 1932-4553; 1941-0484 |
language | eng |
recordid | cdi_proquest_journals_2227589350 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Audio data; audio enhancement; Automatic speech recognition; Background noise; Computational modeling; Computer architecture; Computer memory; connectionist temporal memory; Convolution; Deep learning; Domains; environmental sounds; Hidden Markov models; Information retrieval; Localization; Music; music information retrieval; Neural networks; Short term memory; Signal processing; Sound processing; Source separation; Speech recognition; State-of-the-art reviews; Synthesis; Task analysis; Voice recognition |
title | Deep Learning for Audio Signal Processing |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T05%3A48%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Deep%20Learning%20for%20Audio%20Signal%20Processing&rft.jtitle=IEEE%20journal%20of%20selected%20topics%20in%20signal%20processing&rft.au=Purwins,%20Hendrik&rft.date=2019-05-01&rft.volume=13&rft.issue=2&rft.spage=206&rft.epage=219&rft.pages=206-219&rft.issn=1932-4553&rft.eissn=1941-0484&rft.coden=IJSTGY&rft_id=info:doi/10.1109/JSTSP.2019.2908700&rft_dat=%3Cproquest_RIE%3E2227589350%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2227589350&rft_id=info:pmid/&rft_ieee_id=8678825&rfr_iscdi=true |