MODULAR DEEP LEARNING MODEL

The technology described herein uses a modular model to process speech. A deep learning based acoustic model comprises a stack of different types of neural network layers. The sub-modules of a deep learning based acoustic model can be used to represent distinct non-phonetic acoustic factors, such as accent origin (e.g., native, non-native), speech channel (e.g., mobile, Bluetooth, desktop), speech application scenario (e.g., voice search, short-message dictation), and speaker variation (e.g., individual speakers or clustered speakers). The technology described herein uses one group of sub-modules in a first context and a second group of sub-modules in a second context.
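As a rough illustration of the idea in the abstract (not the patented implementation), the sketch below shows a PyTorch-style acoustic model in which shared layers are combined with swappable sub-modules keyed by non-phonetic factors such as accent and channel, so that one group of sub-modules is active in a first context and a different group in a second context. All names, dimensions, and context keys (ModularAcousticModel, "native", "bluetooth", and so on) are hypothetical.

```python
# Hypothetical sketch of a modular acoustic model: a shared trunk plus
# swappable sub-modules keyed by non-phonetic factors (accent, channel).
# Names and dimensions are illustrative, not taken from the patent.
import torch
import torch.nn as nn


class ModularAcousticModel(nn.Module):
    def __init__(self, feat_dim=80, hidden_dim=512, num_senones=4000):
        super().__init__()
        # Shared layers model phonetic content regardless of context.
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One sub-module per value of each non-phonetic factor.
        self.accent_modules = nn.ModuleDict({
            "native": nn.Linear(hidden_dim, hidden_dim),
            "non_native": nn.Linear(hidden_dim, hidden_dim),
        })
        self.channel_modules = nn.ModuleDict({
            "mobile": nn.Linear(hidden_dim, hidden_dim),
            "bluetooth": nn.Linear(hidden_dim, hidden_dim),
            "desktop": nn.Linear(hidden_dim, hidden_dim),
        })
        # Output layer maps to senone (tied HMM state) posteriors.
        self.output = nn.Linear(hidden_dim, num_senones)

    def forward(self, features, accent, channel):
        # Route the shared representation through the sub-modules that
        # match the current recognition context.
        h = self.shared(features)
        h = torch.relu(self.accent_modules[accent](h))
        h = torch.relu(self.channel_modules[channel](h))
        return self.output(h)


model = ModularAcousticModel()
frames = torch.randn(16, 80)  # a batch of 16 acoustic feature vectors

# First context: native speaker on a Bluetooth channel.
logits_a = model(frames, accent="native", channel="bluetooth")
# Second context: non-native speaker on a mobile channel uses the same
# shared trunk but a different group of sub-modules.
logits_b = model(frames, accent="non_native", channel="mobile")
```

In this sketch the shared trunk is common to all utterances, while each ModuleDict holds one small sub-module per factor value, so switching context only changes which sub-modules the signal is routed through.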

Detailed Description

Bibliographic Details
Main Authors: KUMAR Kshitiz, HUANG Yan, KALGAONKAR Kaustubh Prakash, GONG Yifan, LIU Chaojun
Format: Patent
Language: eng
Subjects:
Online Access: Order full text
creator KUMAR Kshitiz
HUANG Yan
KALGAONKAR Kaustubh Prakash
GONG Yifan
LIU Chaojun
description The technology described herein uses a modular model to process speech. A deep learning based acoustic model comprises a stack of different types of neural network layers. The sub-modules of a deep learning based acoustic model can be used to represent distinct non-phonetic acoustic factors, such as accent origin (e.g., native, non-native), speech channel (e.g., mobile, Bluetooth, desktop), speech application scenario (e.g., voice search, short-message dictation), and speaker variation (e.g., individual speakers or clustered speakers). The technology described herein uses one group of sub-modules in a first context and a second group of sub-modules in a second context.
format Patent
creationdate 2017-09-07
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US2017256254A1
source esp@cenet
subjects ACOUSTICS
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title MODULAR DEEP LEARNING MODEL
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T03%3A37%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=KUMAR%20Kshitiz&rft.date=2017-09-07&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2017256254A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true