FACILITATING END-TO-END COMMUNICATIONS WITH AUTOMATED ASSISTANTS IN MULTIPLE LANGUAGES

Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
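The candidate-generation and scoring flow described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's actual implementation; the function names, the stub components, and the full (rather than partial) translation are all assumptions made for the example:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    text: str     # a natural language output candidate
    score: float  # quality score used for selection

def respond(transcript: str,
            translate: Callable[[str], str],
            fulfill: Callable[[str], str],
            score: Callable[[str], float]) -> str:
    """Fulfill the intent in both languages and return the best-scoring output."""
    # Candidate 1: identify and fulfill the intent directly in the first language.
    native = Candidate(fulfill(transcript), 0.0)
    # Candidate 2: translate the recognition output into the second language,
    # then identify and fulfill the intent there.
    translated = Candidate(fulfill(translate(transcript)), 0.0)
    # Score both candidates and select the output to present.
    for c in (native, translated):
        c.score = score(c.text)
    return max((native, translated), key=lambda c: c.score).text

# Toy stubs (assumptions for illustration only):
def toy_translate(text: str) -> str:
    return {"wie spät ist es": "what time is it"}.get(text, text)

def toy_fulfill(text: str) -> str:
    # In this toy setup, only the second-language grammar covers the intent.
    return "It is 3 pm" if text == "what time is it" else "Sorry, I did not understand."

def toy_score(output: str) -> float:
    return 0.9 if output == "It is 3 pm" else 0.1

print(respond("wie spät ist es", toy_translate, toy_fulfill, toy_score))
```

With these stubs, the first-language candidate is a low-scoring fallback, the translated path produces a usable answer, and the scorer selects the latter, mirroring the selection step in the abstract.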


Bibliographic details
Main authors: VUSKOVIC, Vladimir; JAIN, Vibhor; LEI, Jinna; KUCZMARSKI, James; IKEDA, Daisuke; SUBRAMANYA, Amarnag; NIU, Mengmeng; JOHNSON PREMKUMAR, Melvin Jose; DAI, Luna; BALANI, Nihal Sandeep; RANJAN, Nimesh
Format: Patent
Language: English; French; German
Online access: full text available
Publication date: 2023-04-12
Record ID: cdi_epo_espacenet_EP3716267B1
Source: esp@cenet
Subjects: ACOUSTICS; CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION