Acoustic model training using corrected terms

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for speech recognition. One of the methods includes receiving first audio data corresponding to an utterance; obtaining a first transcription of the first audio data; receiving data indicating (i) a selection of one or more terms of the first transcription and (ii) one or more replacement terms; determining that one or more of the replacement terms are classified as a correction of one or more of the selected terms; in response to determining that the one or more of the replacement terms are classified as a correction of the one or more of the selected terms, obtaining a first portion of the first audio data that corresponds to one or more terms of the first transcription; and using the first portion of the first audio data that is associated with the one or more terms of the first transcription to train an acoustic model for recognizing the one or more of the replacement terms.
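The flow the abstract describes can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation: `Word`, `is_correction`, and `training_example` are hypothetical names, the correction classifier here is a simple edit-distance heuristic standing in for whatever classifier the method actually uses, and the audio is assumed to be indexable by millisecond via word-level time alignments.

```python
from dataclasses import dataclass

@dataclass
class Word:
    """One recognized word with its time alignment in the audio."""
    text: str
    start_ms: int
    end_ms: int

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via a rolling one-row DP table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def is_correction(selected: str, replacement: str) -> bool:
    """Heuristic classifier (assumption): a replacement counts as a
    correction when it differs from the selection but stays lexically
    close, rather than being an unrelated rewrite."""
    if selected == replacement:
        return False
    return edit_distance(selected, replacement) <= max(len(selected), len(replacement)) // 2

def training_example(audio, words, sel_range, replacement):
    """If the edit is classified as a correction, slice out the audio
    portion aligned with the selected terms and pair it with the
    replacement text as a new acoustic-model training example."""
    i, j = sel_range
    selected = " ".join(w.text for w in words[i:j])
    if not is_correction(selected, replacement):
        return None
    return audio[words[i].start_ms:words[j - 1].end_ms], replacement
```

For example, if the recognizer transcribed "call bob" and the user selected "bob" and typed "rob", the pair (audio of "bob", "rob") would be emitted as training data; replacing "bob" with an unrelated phrase would be rejected as a rewrite rather than a correction.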

Detailed description

Bibliographic details
Main authors: Skobeltsyn, Gleb; Cherepanov, Evgeny A; Kapralova, Olga; Baeuml, Martin; Osmakov, Dmitry
Format: Patent
Language: eng
Publication date: 2021-12-14
Online access: order full text
Record ID: cdi_epo_espacenet_US11200887B2
Source: esp@cenet
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T09%3A02%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Skobeltsyn,%20Gleb&rft.date=2021-12-14&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11200887B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true