Multilingual acoustic modeling for speech recognition based on subspace Gaussian Mixture Models
Although research has previously been done on multilingual speech recognition, it has been found to be very difficult to improve over separately trained systems. The usual approach has been to use some kind of "universal phone set" that covers multiple languages. We report experiments on a different approach to multilingual speech recognition, in which the phone sets are entirely distinct but the model has parameters not tied to specific states that are shared across languages. We use a model called a "Subspace Gaussian Mixture Model", where states' distributions are Gaussian Mixture Models with a common structure, constrained to lie in a subspace of the total parameter space. The parameters that define this subspace can be shared across languages. We obtain substantial WER improvements with this approach, especially with very small amounts of in-language training data.
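For readers unfamiliar with the model, a brief sketch of the subspace GMM form described above may help (basic form without substates or speaker vectors; the notation follows the standard SGMM formulation rather than anything stated in this record). Each HMM state j draws on a common pool of I Gaussians, and all state-specific behaviour is captured by a low-dimensional vector v_j:

\[
p(x \mid j) \;=\; \sum_{i=1}^{I} w_{ji}\, \mathcal{N}\!\left(x;\, \mu_{ji},\, \Sigma_i\right),
\qquad
\mu_{ji} = M_i\, v_j,
\qquad
w_{ji} = \frac{\exp\!\left(\mathbf{w}_i^{\top} v_j\right)}{\sum_{i'=1}^{I} \exp\!\left(\mathbf{w}_{i'}^{\top} v_j\right)}.
\]

The globally shared parameters M_i, w_i, and \Sigma_i define the subspace; in the multilingual setup described in the abstract, these can be estimated on data pooled across languages, while the state vectors v_j (and hence the phone sets) remain language-specific.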
Saved in:
Main authors: | Burget, L; Schwarz, P; Agarwal, M; Akyazi, P; Kai Feng; Ghoshal, A; Glembek, O; Goel, N; Karafiát, M; Povey, D; Rastrow, A; Rose, R C; Thomas, S |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 4337 |
---|---|
container_issue | |
container_start_page | 4334 |
container_title | |
container_volume | |
creator | Burget, L; Schwarz, P; Agarwal, M; Akyazi, P; Kai Feng; Ghoshal, A; Glembek, O; Goel, N; Karafiát, M; Povey, D; Rastrow, A; Rose, R C; Thomas, S |
description | Although research has previously been done on multilingual speech recognition, it has been found to be very difficult to improve over separately trained systems. The usual approach has been to use some kind of "universal phone set" that covers multiple languages. We report experiments on a different approach to multilingual speech recognition, in which the phone sets are entirely distinct but the model has parameters not tied to specific states that are shared across languages. We use a model called a "Subspace Gaussian Mixture Model" where states' distributions are Gaussian Mixture Models with a common structure, constrained to lie in a subspace of the total parameter space. The parameters that define this subspace can be shared across languages. We obtain substantial WER improvements with this approach, especially with very small amounts of in-language training data. |
doi_str_mv | 10.1109/ICASSP.2010.5495646 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1520-6149 |
ispartof | 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, 2010, p.4334-4337 |
issn | 1520-6149 2379-190X |
language | eng |
recordid | cdi_ieee_primary_5495646 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Automatic speech recognition; Availability; Hidden Markov models; Humans; Large vocabulary speech recognition; Multilingual acoustic modeling; Natural languages; Robustness; Space technology; Speech recognition; Subspace constraints; Subspace Gaussian mixture model; Training data |
title | Multilingual acoustic modeling for speech recognition based on subspace Gaussian Mixture Models |