Exploring RNN-Transducer for Chinese Speech Recognition
creator | Wang, Senmao ; Zhou, Pan ; Chen, Wei ; Jia, Jia ; Xie, Lei |
description | End-to-end approaches have drawn much attention recently for significantly
simplifying the construction of an automatic speech recognition (ASR) system.
The RNN transducer (RNN-T) is a popular end-to-end method. Previous studies
have shown that RNN-T is difficult to train and that a very complex training
process is needed to reach reasonable performance. In this paper, we explore
RNN-T for a Chinese large vocabulary continuous speech recognition (LVCSR)
task and aim to simplify the training process while maintaining performance.
First, we propose a new learning rate decay strategy to accelerate model
convergence. Second, we find that adding convolutional layers at the beginning
of the network and using ordered training data allow the encoder pre-training
stage to be discarded without loss of performance. In addition, we design
experiments to find a balance among GPU memory usage, training cycle and model
performance. Finally, we achieve a 16.9% character error rate (CER) on our
test set, a 2% absolute improvement over a strong BLSTM CE system with a
language model trained on the same text corpus. |
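
For illustration only, here is a minimal PyTorch sketch (not the authors' released code) of two of the ideas the abstract names: a convolutional front-end placed ahead of the recurrent encoder, and a decay-on-stall learning-rate rule. The layer sizes, the stride-2 convolutions, and the halving factor are all assumptions; the paper's exact architecture and decay criterion are not reproduced here.

```python
import torch
import torch.nn as nn


class ConvFrontEndEncoder(nn.Module):
    """BLSTM encoder preceded by 2-D convolutions over (time, frequency).

    Hypothetical sizes: 80 mel bins in, two stride-2 conv layers, then a
    4-layer bidirectional LSTM. The convs shrink both axes by a factor of 4.
    """

    def __init__(self, n_mels=80, hidden=512, layers=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        conv_out = 32 * ((n_mels + 3) // 4)   # channels * reduced freq bins
        self.blstm = nn.LSTM(conv_out, hidden, num_layers=layers,
                             batch_first=True, bidirectional=True)

    def forward(self, feats):                 # feats: (batch, time, n_mels)
        x = self.conv(feats.unsqueeze(1))     # (batch, 32, time/4, n_mels/4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        out, _ = self.blstm(x)
        return out                            # (batch, time/4, 2 * hidden)


def decay_if_stalled(optimizer, dev_loss, best_loss, factor=0.5):
    """Assumed rule: halve the learning rate when dev loss stops improving."""
    if dev_loss >= best_loss:
        for group in optimizer.param_groups:
            group["lr"] *= factor
    return min(dev_loss, best_loss)


if __name__ == "__main__":
    enc = ConvFrontEndEncoder()
    feats = torch.randn(2, 100, 80)           # 2 utterances, 100 frames
    print(enc(feats).shape)                   # torch.Size([2, 25, 1024])
```

The stride-2 convolutions also shorten the time axis fourfold before the LSTM runs, which is one plausible way to trade sequence length against GPU memory, in the spirit of the memory/cycle/performance balance the abstract describes.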
doi_str_mv | 10.48550/arxiv.1811.05097 |
format | Article |
identifier | DOI: 10.48550/arxiv.1811.05097 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Computation and Language ; Computer Science - Learning ; Computer Science - Sound |
title | Exploring RNN-Transducer for Chinese Speech Recognition |
url | https://arxiv.org/abs/1811.05097 |