Hybrid Autoregressive Transducer (HAT)
This paper proposes and evaluates the hybrid autoregressive transducer (HAT) model, a time-synchronous encoder-decoder model that preserves the modularity of conventional automatic speech recognition systems. The HAT model provides a way to measure the quality of the internal language model, which can be used to decide whether inference with an external language model is beneficial.
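As a rough illustration of the idea summarized above, the sketch below separates the blank decision from the label distribution and estimates the internal language model by zeroing the acoustic contribution. This is a minimal sketch, not the authors' implementation: the additive tanh joint, the weight names (`W_blank`, `W_label`), and the toy dimensions are assumptions made for illustration only.

```python
# Minimal HAT-style output sketch: a Bernoulli blank probability plus a
# separate label distribution, with the internal LM estimated by removing
# the acoustic (encoder) contribution. All shapes and names are illustrative.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hat_joint(enc_t, dec_u, W_blank, W_label):
    """Combine one encoder frame and one prediction-network state.

    Returns (p_blank, p_labels): a Bernoulli blank probability and a
    distribution over non-blank labels; (1 - p_blank) * p_labels gives
    the label emission probabilities.
    """
    joint = np.tanh(enc_t + dec_u)          # simple additive joint (assumption)
    p_blank = sigmoid(joint @ W_blank)      # scalar blank probability
    p_labels = softmax(joint @ W_label)     # distribution over labels
    return p_blank, p_labels

def internal_lm_scores(dec_u, W_blank, W_label):
    """Estimate internal LM log-scores by zeroing the acoustic contribution."""
    zero_enc = np.zeros_like(dec_u)
    _, p_labels = hat_joint(zero_enc, dec_u, W_blank, W_label)
    return np.log(p_labels)

# Toy usage with random weights.
rng = np.random.default_rng(0)
d, vocab = 8, 5
enc_t = rng.normal(size=d)            # one acoustic encoder frame
dec_u = rng.normal(size=d)            # one prediction-network state
W_blank = rng.normal(size=d)
W_label = rng.normal(size=(d, vocab))

p_blank, p_labels = hat_joint(enc_t, dec_u, W_blank, W_label)
print("P(blank) =", p_blank)
print("P(label y | not blank) =", p_labels)   # sums to 1
print("internal LM log-scores:", internal_lm_scores(dec_u, W_blank, W_label))
```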
Saved in:
Main authors: | Variani, Ehsan; Rybach, David; Allauzen, Cyril; Riley, Michael |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Sound |
Online access: | Order full text |
creator | Variani, Ehsan; Rybach, David; Allauzen, Cyril; Riley, Michael |
description | This paper proposes and evaluates the hybrid autoregressive transducer (HAT)
model, a time-synchronous encoder-decoder model that preserves the modularity of
conventional automatic speech recognition systems. The HAT model provides a way
to measure the quality of the internal language model that can be used to
decide whether inference with an external language model is beneficial or not.
This article also presents a finite context version of the HAT model that
addresses the exposure bias problem and significantly simplifies the overall
training and inference. We evaluate our proposed model on a large-scale voice
search task. Our experiments show significant improvements in WER compared to
the state-of-the-art approaches. |
doi_str_mv | 10.48550/arxiv.2003.07705 |
format | Article |
creationdate | 2020-03-12 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2003.07705 |
language | eng |
recordid | cdi_arxiv_primary_2003_07705 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Sound |
title | Hybrid Autoregressive Transducer (HAT) |
url | https://arxiv.org/abs/2003.07705 |