VOICE COMMANDS FOR AN AUTOMATED ASSISTANT UTILIZED IN SMART DICTATION

Systems and methods described herein relate to determining whether to incorporate recognized text, that corresponds to a spoken utterance of a user of a client device, into a transcription displayed at the client device, or to cause an assistant command, that is associated with the transcription and that is based on the recognized text, to be performed by an automated assistant implemented by the client device. The spoken utterance is received during a dictation session between the user and the automated assistant. Implementations can process, using automatic speech recognition model(s), audio data that captures the spoken utterance to generate the recognized text. Further, implementations can determine whether to incorporate the recognized text into the transcription or cause the assistant command to be performed based on touch input being directed to the transcription, a state of the transcription, and/or audio-based characteristic(s) of the spoken utterance.
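The dispatch the abstract describes — routing each recognized utterance either into the transcription as dictated text or to the assistant as a command, based on touch input, the transcription's state, and audio-based characteristics — can be sketched roughly as follows. This is an illustrative reconstruction, not the patented method: the class names, the command-word list, and the specific decision order are all assumptions for the sake of a concrete example.

```python
# Hypothetical sketch of the dictation-vs-command decision described in the
# abstract. All names, signals, and the rule ordering are illustrative.

from dataclasses import dataclass

# Illustrative set of words that make an utterance "command-like".
COMMAND_PREFIXES = ("send", "delete", "replace", "undo")


@dataclass
class UtteranceContext:
    recognized_text: str          # output of the ASR model(s)
    touch_on_transcription: bool  # touch input directed at the transcription
    transcription_empty: bool     # state of the transcription
    long_trailing_pause: bool     # an audio-based characteristic


def classify(ctx: UtteranceContext) -> str:
    """Return 'command' or 'dictation' for the recognized text."""
    words = ctx.recognized_text.split()
    looks_like_command = bool(words) and words[0].lower() in COMMAND_PREFIXES

    # Touch directed at the transcription biases toward a command,
    # e.g. "delete that" spoken while text is selected.
    if ctx.touch_on_transcription and looks_like_command:
        return "command"

    # An empty transcription cannot be the target of an edit command,
    # so the text is incorporated as dictation.
    if ctx.transcription_empty:
        return "dictation"

    # A command-like phrase set off by a pause (an audio-based cue)
    # is treated as a command rather than dictated text.
    if looks_like_command and ctx.long_trailing_pause:
        return "command"

    return "dictation"
```

In this toy version, "send it to Bob" typed mid-sentence with no pause stays dictation, while the same phrase after a clear pause, or while the user is touching the transcription, is routed to the assistant.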

Detailed description

Saved in:
Bibliographic details
Main authors: SABUR, Zaheed, ABDAGIC, Alvin, NATTA, Jacopo Sannazzaro, CARBUNE, Victor, PANDIRI, Srikanth, PROSKURNIA, Julia, GOJ, Krzystof Andrzej, ZARINS, Viesturs, BEHZADI, Behshad, D'ERCOLE, Nicolo, KOTHARI, Luv
Format: Patent
Language: eng ; fre ; ger
Subjects:
Online access: Order full text
creator SABUR, Zaheed
ABDAGIC, Alvin
NATTA, Jacopo Sannazzaro
CARBUNE, Victor
PANDIRI, Srikanth
PROSKURNIA, Julia
GOJ, Krzystof Andrzej
ZARINS, Viesturs
BEHZADI, Behshad
D'ERCOLE, Nicolo
KOTHARI, Luv
description Systems and methods described herein relate to determining whether to incorporate recognized text, that corresponds to a spoken utterance of a user of a client device, into a transcription displayed at the client device, or to cause an assistant command, that is associated with the transcription and that is based on the recognized text, to be performed by an automated assistant implemented by the client device. The spoken utterance is received during a dictation session between the user and the automated assistant. Implementations can process, using automatic speech recognition model(s), audio data that captures the spoken utterance to generate the recognized text. Further, implementations can determine whether to incorporate the recognized text into the transcription or cause the assistant command to be performed based on touch input being directed to the transcription, a state of the transcription, and/or audio-based characteristic(s) of the spoken utterance.
format Patent
fulltext fulltext_linktorsrc
language eng ; fre ; ger
recordid cdi_epo_espacenet_EP4115278A1
source esp@cenet
subjects CALCULATING
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
PHYSICS
title VOICE COMMANDS FOR AN AUTOMATED ASSISTANT UTILIZED IN SMART DICTATION
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T02%3A25%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=SABUR,%20Zaheed&rft.date=2023-01-11&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EEP4115278A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true