Exploiting sparsity in stranded hidden Markov models for automatic speech recognition
We have recently proposed the stranded HMM to achieve a more accurate representation of heterogeneous data. As opposed to the regular Gaussian mixture HMM, the stranded HMM explicitly models the relationships among the mixture components. The transitions among mixture components encode possible trajectories of acoustic features for speech units. Accurately representing the underlying transition structure is crucial for the stranded HMM to produce an optimal recognition performance. In this paper, we propose to learn the stranded HMM structure by imposing sparsity constraints. In particular, entropic priors are incorporated in the maximum a posteriori (MAP) estimation of the mixture transition matrices. The experimental results showed that a significant improvement in model sparsity can be obtained with a slight sacrifice of the recognition accuracy.
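The abstract's core idea, MAP estimation of a transition-matrix row under an entropic prior P(θ) ∝ exp(−βH(θ)), can be illustrated with a minimal numeric sketch. This is not the paper's algorithm (the paper's exact update rule is not given in this record); the function name `map_transition_row`, the sparsity weight `beta`, and the toy counts are all illustrative assumptions. A softmax parameterization keeps the estimate on the probability simplex:

```python
import numpy as np
from scipy.optimize import minimize

def map_transition_row(counts, beta):
    """Illustrative MAP estimate of one transition-matrix row under an
    entropic prior P(theta) ~ exp(-beta * H(theta)).  Higher beta favors
    lower-entropy (sparser) rows.  Hypothetical sketch, not the paper's method."""
    counts = np.asarray(counts, dtype=float)

    def neg_log_posterior(z):
        z = z - z.max()                      # softmax keeps theta on the simplex
        theta = np.exp(z) / np.exp(z).sum()
        theta = np.clip(theta, 1e-12, 1.0)   # guard the logarithms
        loglik = counts @ np.log(theta)      # multinomial log-likelihood
        neg_entropy = (theta * np.log(theta)).sum()  # equals -H(theta)
        return -(loglik + beta * neg_entropy)

    z0 = np.log(counts + 1.0)                # start near the ML solution
    res = minimize(neg_log_posterior, z0, method="L-BFGS-B")
    z = res.x - res.x.max()
    return np.exp(z) / np.exp(z).sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

counts = np.array([8.0, 1.0, 0.5, 0.5])      # toy expected transition counts
ml = counts / counts.sum()                   # maximum-likelihood row: [0.8, 0.1, 0.05, 0.05]
map_row = map_transition_row(counts, beta=5.0)

print("ML :", ml.round(3), "H =", round(entropy(ml), 3))
print("MAP:", map_row.round(3), "H =", round(entropy(map_row), 3))
```

The entropic prior pulls probability mass toward the dominant transition, so the MAP row has lower entropy than the ML row; in the limit of large `beta` it approaches a hard (one-hot) transition, which matches the paper's reported trade-off between sparsity and recognition accuracy.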
Saved in:

Main authors: | Yong Zhao; Biing-Hwang Juang |
---|---|
Format: | Conference proceeding |
Language: | English |
Subjects: | hidden Markov model; Speech recognition |
Online access: | Order full text |
Field | Value |
---|---|
container_end_page | 1625 |
container_issue | |
container_start_page | 1623 |
container_title | |
container_volume | |
creator | Yong Zhao; Biing-Hwang Juang |
description | We have recently proposed the stranded HMM to achieve a more accurate representation of heterogeneous data. As opposed to the regular Gaussian mixture HMM, the stranded HMM explicitly models the relationships among the mixture components. The transitions among mixture components encode possible trajectories of acoustic features for speech units. Accurately representing the underlying transition structure is crucial for the stranded HMM to produce an optimal recognition performance. In this paper, we propose to learn the stranded HMM structure by imposing sparsity constraints. In particular, entropic priors are incorporated in the maximum a posteriori (MAP) estimation of the mixture transition matrices. The experimental results showed that a significant improvement in model sparsity can be obtained with a slight sacrifice of the recognition accuracy. |
doi_str_mv | 10.1109/ACSSC.2012.6489305 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1058-6393; EISSN: 2576-2303; ISBN: 9781467350501; DOI: 10.1109/ACSSC.2012.6489305 |
ispartof | 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 2012, p.1623-1625 |
issn | 1058-6393; 2576-2303 |
language | eng |
recordid | cdi_ieee_primary_6489305 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | hidden Markov model; Speech recognition |
title | Exploiting sparsity in stranded hidden Markov models for automatic speech recognition |