End-to-End Speech Recognition With Joint Dereverberation Of Sub-Band Autoregressive Envelopes

The end-to-end (E2E) automatic speech recognition (ASR) systems are often required to operate in reverberant conditions, where the long-term sub-band envelopes of the speech are temporally smeared. In this paper, we develop a feature enhancement approach using a neural model operating on sub-band temporal envelopes. The temporal envelopes are modeled using the framework of frequency domain linear prediction (FDLP). The neural enhancement model proposed in this paper performs an envelope gain based enhancement of temporal envelopes. The model architecture consists of a combination of convolutional and long short term memory (LSTM) neural network layers. Further, the envelope dereverberation, feature extraction and acoustic modeling using transformer based E2E ASR can all be jointly optimized for the speech recognition task. The joint optimization ensures that the dereverberation model targets the ASR cost function. We perform E2E speech recognition experiments on the REVERB challenge dataset as well as on the VOiCES dataset. In these experiments, the proposed joint modeling approach yields significant improvements compared to the baseline E2E ASR system (average relative improvements of 21% on the REVERB challenge dataset and about 10% on the VOiCES dataset).
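As a rough illustration of the enhancement stage described in the abstract (not the authors' implementation), the sketch below shows a CNN + LSTM network that predicts per-band, per-frame gains for log sub-band envelopes. The FDLP envelope extraction itself is not shown; the module name, layer sizes, positive-gain parameterization, and the MSE stand-in loss are assumptions made for the sketch, since in the paper the training signal would come from the transformer E2E ASR cost rather than an envelope-level target.

```python
# A minimal sketch (not the paper's code) of envelope-gain enhancement:
# a CNN + LSTM predicts multiplicative gains for sub-band temporal envelopes.
import torch
import torch.nn as nn


class EnvelopeGainEnhancer(nn.Module):
    """Predicts per-band, per-frame gains for reverberant sub-band envelopes."""

    def __init__(self, num_bands: int = 36, hidden: int = 256):
        super().__init__()
        # 2-D convolutions over (time, band) capture local spectro-temporal context.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # LSTM over time models the long-term smearing caused by reverberation.
        self.lstm = nn.LSTM(32 * num_bands, hidden, batch_first=True, bidirectional=True)
        # One positive gain per sub-band and frame (Softplus keeps gains > 0).
        self.gain = nn.Sequential(nn.Linear(2 * hidden, num_bands), nn.Softplus())

    def forward(self, log_env: torch.Tensor) -> torch.Tensor:
        # log_env: (batch, time, bands) log sub-band envelopes of reverberant speech
        b, t, _ = log_env.shape
        x = self.conv(log_env.unsqueeze(1))          # (b, 32, t, bands)
        x = x.permute(0, 2, 1, 3).reshape(b, t, -1)  # (b, t, 32*bands)
        x, _ = self.lstm(x)
        g = self.gain(x)                             # (b, t, bands) envelope gains
        # Gain-based enhancement in the log domain: add log-gain to log-envelope.
        return log_env + torch.log(g + 1e-8)


if __name__ == "__main__":
    # Toy forward/backward pass with random tensors standing in for FDLP envelopes.
    model = EnvelopeGainEnhancer()
    reverberant = torch.randn(2, 100, 36)   # fake log envelopes
    clean_target = torch.randn(2, 100, 36)  # stand-in target; the paper uses the ASR cost
    enhanced = model(reverberant)
    loss = nn.functional.mse_loss(enhanced, clean_target)
    loss.backward()
    print(enhanced.shape, float(loss))
```

In the joint setup described in the abstract, a module of this kind would sit in front of feature extraction and the transformer ASR encoder, so that a single backward pass through the ASR objective also updates the dereverberation network.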


Bibliographic Details
Published in: arXiv.org 2022-02
Main authors: Kumar, Rohit, Purushothaman, Anurenjan, Sreeram, Anirudh, Ganapathy, Sriram
Format: Article
Language: English
Subjects: Automatic speech recognition; Cost function; Datasets; Envelopes; Feature extraction; Linear prediction; Modelling; Neural networks; Optimization; Speech; Voice recognition
Online access: Full text
creator Kumar, Rohit
Purushothaman, Anurenjan
Sreeram, Anirudh
Ganapathy, Sriram
description The end-to-end (E2E) automatic speech recognition (ASR) systems are often required to operate in reverberant conditions, where the long-term sub-band envelopes of the speech are temporally smeared. In this paper, we develop a feature enhancement approach using a neural model operating on sub-band temporal envelopes. The temporal envelopes are modeled using the framework of frequency domain linear prediction (FDLP). The neural enhancement model proposed in this paper performs an envelope gain based enhancement of temporal envelopes. The model architecture consists of a combination of convolutional and long short term memory (LSTM) neural network layers. Further, the envelope dereverberation, feature extraction and acoustic modeling using transformer based E2E ASR can all be jointly optimized for the speech recognition task. The joint optimization ensures that the dereverberation model targets the ASR cost function. We perform E2E speech recognition experiments on the REVERB challenge dataset as well as on the VOiCES dataset. In these experiments, the proposed joint modeling approach yields significant improvements compared to the baseline E2E ASR system (average relative improvements of 21% on the REVERB challenge dataset and about 10% on the VOiCES dataset).
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-02
issn 2331-8422
language eng
recordid cdi_proquest_journals_2559946624
source Free E-Journals
subjects Automatic speech recognition
Cost function
Datasets
Envelopes
Feature extraction
Linear prediction
Modelling
Neural networks
Optimization
Speech
Voice recognition
title End-to-End Speech Recognition With Joint Dereverberation Of Sub-Band Autoregressive Envelopes
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T04%3A17%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=End-to-End%20Speech%20Recognition%20With%20Joint%20Dereverberation%20Of%20Sub-Band%20Autoregressive%20Envelopes&rft.jtitle=arXiv.org&rft.au=Kumar,%20Rohit&rft.date=2022-02-18&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2559946624%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2559946624&rft_id=info:pmid/&rfr_iscdi=true