COLLABORATIVE RANKING OF INTERPRETATIONS OF SPOKEN UTTERANCES
Implementations described herein are directed to enabling collaborative ranking of interpretations of spoken utterances based on data that is available to an automated assistant and third-party agent(s), respectively. The automated assistant can determine first-party interpretation(s) of a spoken utterance provided by a user, and can cause the third-party agent(s) to determine third-party interpretation(s) of the spoken utterance provided by the user. In some implementations, the automated assistant can select a given interpretation, from the first-party interpretation(s) and the third-party interpretation(s), of the spoken utterance, and can cause a given third-party agent to satisfy the spoken utterance based on the given interpretation. In additional or alternative implementations, an independent third-party agent can obtain the first-party interpretation(s) and the third-party interpretation(s), select the given interpretation, and then transmit the given interpretation to the automated assistant and/or the given third-party agent.
Saved in:
Main authors: CHATHAM, Brian ; GOEL, Akshay ; PARK, Richard ; SANCHEZ, David ; LAPCHUK, Dmytro ; KHANDELWAL, Nitin ; ECCLES, Jonathan
Format: Patent
Language: eng ; fre ; ger
Online access: Order full text
creator | CHATHAM, Brian ; GOEL, Akshay ; PARK, Richard ; SANCHEZ, David ; LAPCHUK, Dmytro ; KHANDELWAL, Nitin ; ECCLES, Jonathan |
description | Implementations described herein are directed to enabling collaborative ranking of interpretations of spoken utterances based on data that is available to an automated assistant and third-party agent(s), respectively. The automated assistant can determine first-party interpretation(s) of a spoken utterance provided by a user, and can cause the third-party agent(s) to determine third-party interpretation(s) of the spoken utterance provided by the user. In some implementations, the automated assistant can select a given interpretation, from the first-party interpretation(s) and the third-party interpretation(s), of the spoken utterance, and can cause a given third-party agent to satisfy the spoken utterance based on the given interpretation. In additional or alternative implementations, an independent third-party agent can obtain the first-party interpretation(s) and the third-party interpretation(s), select the given interpretation, and then transmit the given interpretation to the automated assistant and/or the given third-party agent. |
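The flow the abstract describes — the assistant producing first-party interpretation(s), third-party agent(s) producing their own, and one interpretation being selected and dispatched — can be sketched as follows. All intent labels, agent names, and confidence scores below are hypothetical illustrations; the patent does not specify a ranking function, so selection is reduced here to a simple maximum over scores.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    source: str        # "first_party" or the name of a third-party agent
    intent: str        # hypothetical intent label
    confidence: float  # hypothetical ranking score

def first_party_interpretations(utterance: str) -> list[Interpretation]:
    # Stand-in for the automated assistant's own understanding of the utterance.
    return [Interpretation("first_party", "play_music", 0.72)]

def third_party_interpretations(utterance: str, agents: list[str]) -> list[Interpretation]:
    # Stand-in for causing each third-party agent to interpret the utterance.
    return [Interpretation(agent, "play_podcast", 0.81) for agent in agents]

def select_interpretation(candidates: list[Interpretation]) -> Interpretation:
    # Collaborative ranking reduced to picking the highest-scoring candidate;
    # the actual selection criteria are not specified in this record.
    return max(candidates, key=lambda i: i.confidence)

utterance = "play something relaxing"
candidates = (first_party_interpretations(utterance)
              + third_party_interpretations(utterance, ["agent_a"]))
chosen = select_interpretation(candidates)
# The chosen interpretation's source is the agent asked to satisfy the utterance.
print(chosen.source, chosen.intent)
```

Under the abstract's alternative arrangement, `select_interpretation` would instead run on an independent third-party agent, which would then transmit the result back to the assistant and/or the chosen agent.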
format | Patent |
fulltext | fulltext_linktorsrc |
language | eng ; fre ; ger |
recordid | cdi_epo_espacenet_EP4169016B1 |
source | esp@cenet |
subjects | ACOUSTICS ; CALCULATING ; COMPUTING ; COUNTING ; ELECTRIC DIGITAL DATA PROCESSING ; MUSICAL INSTRUMENTS ; PHYSICS ; SPEECH ANALYSIS OR SYNTHESIS ; SPEECH OR AUDIO CODING OR DECODING ; SPEECH OR VOICE PROCESSING ; SPEECH RECOGNITION |
title | COLLABORATIVE RANKING OF INTERPRETATIONS OF SPOKEN UTTERANCES |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T06%3A55%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=CHATHAM,%20Brian&rft.date=2024-04-03&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EEP4169016B1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |