Voice-controlled entry of content into graphical user interfaces

Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. To provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. Should the user provide a spoken utterance, the user can elect to provide an utterance that embodies the intended input (e.g., an actual address) or a reference to the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
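The selection logic the abstract describes can be sketched roughly as follows: given a transcript of a spoken utterance destined for an address field, either use the transcript verbatim (it already is an address) or treat it as a reference and resolve it to the intended value. This is only an illustrative sketch, not the patented implementation; the heuristic, the `CONTACTS` mapping, and all function names are hypothetical.

```python
# Illustrative sketch: decide between a verbatim interpretation of an
# utterance and resolving it as a reference to the intended input.
import re

# Hypothetical contact book mapping spoken references to stored addresses.
CONTACTS = {
    "mom": "1600 Amphitheatre Pkwy, Mountain View, CA",
    "office": "345 Spear St, San Francisco, CA",
}

def looks_like_address(transcript: str) -> bool:
    # Crude heuristic: a street address usually starts with a house number.
    return bool(re.match(r"^\d+\s+\w+", transcript))

def fill_entry_field(transcript: str) -> str:
    """Return the text to place into an address entry field."""
    text = transcript.strip()
    if looks_like_address(text):
        return text  # verbatim interpretation of the utterance
    # Otherwise treat the utterance as a reference and resolve it;
    # fall back to the raw transcript if no match is found.
    return CONTACTS.get(text.lower(), text)

print(fill_entry_field("123 Main Street, Springfield"))  # used verbatim
print(fill_entry_field("Mom"))  # resolved to the stored address
```

A production system would of course use speech recognition plus a far richer resolver (contacts, history, context) rather than a regex, but the branch structure — verbatim when the utterance embodies the intended input, resolution when it merely references it — mirrors the abstract.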

Detailed Description

Bibliographic Details
Main Authors: Kothari, Luv, Kannan, Akshay, Ghosh, Angana, Behzadi, Behshad, Liu, Xu, Gultekin, Gokay Baris, Lu, Yang, Carbotta, Domenico, Pandiri, Srikanth, Cheng, Steve, Sabur, Zaheed, Wang, Qi
Format: Patent
Language: eng
Subjects:
description Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. To provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. Should the user provide a spoken utterance, the user can elect to provide an utterance that embodies the intended input (e.g., an actual address) or a reference to the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
format Patent
date 2023-12-26
language eng
recordid cdi_epo_espacenet_US11853649B2
source esp@cenet
subjects ACOUSTICS
CALCULATING
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title Voice-controlled entry of content into graphical user interfaces
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T01%3A24%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Kothari,%20Luv&rft.date=2023-12-26&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11853649B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true