Contextual content for voice user interfaces

The present disclosure describes techniques for dynamically determining when information is to be output to a user, as well as what information is to be output to a user. A natural language processing system may receive, from a first device, first data representing information to be output at a first point during a skill session. The natural language processing system may also receive, from a second device, second data representing a natural language input. The natural language processing system may determine a skill component is to execute with respect to the natural language input. The natural language processing system may send, to the skill component, second data representing the natural language input. The natural language processing system may receive, from the skill component, an indication that an ongoing first skill session with the second device has reached the first point. After receiving the indication and based at least in part on system usage data associated with at least one user, the natural language processing system may determine third data representing a prompt corresponding to the information and send, to the second device, the third data for output.
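The flow described in the abstract can be sketched in miniature: a system stores information to be output at a given point in a skill session, routes natural-language inputs to a skill component, and emits a prompt once the skill reports reaching that point. All class and function names below are hypothetical illustrations, not the patent's actual components.

```python
class SkillComponent:
    """Runs a skill session and reports when it reaches a registered point."""

    def __init__(self, checkpoint):
        self.checkpoint = checkpoint  # the "first point" in the session
        self.step = 0

    def handle(self, nl_input):
        # Process a natural language input; signal once the session
        # has advanced to the registered point.
        self.step += 1
        return {"reached_point": self.step >= self.checkpoint}


class NLPSystem:
    def __init__(self):
        self.pending_info = {}  # session point -> information to output there

    def register_info(self, point, information):
        # "First data", from the first device: information to be output
        # at a given point during a skill session.
        self.pending_info[point] = information

    def process(self, skill, nl_input, usage_data):
        # "Second data", from the second device: a natural language input,
        # routed to the skill component determined to handle it.
        result = skill.handle(nl_input)
        if result["reached_point"] and usage_data.get("accepts_prompts", True):
            # "Third data": a prompt corresponding to the stored information,
            # gated on usage data and returned for output on the second device.
            info = self.pending_info.get(skill.checkpoint)
            if info is not None:
                return f"By the way: {info}"
        return None


nlp = NLPSystem()
nlp.register_info(point=2, information="your package arrives today")
skill = SkillComponent(checkpoint=2)

print(nlp.process(skill, "play my playlist", {"accepts_prompts": True}))  # None: point not yet reached
print(nlp.process(skill, "next song", {"accepts_prompts": True}))  # prompt is emitted
```

The gating on `usage_data` mirrors the abstract's condition that the prompt is determined "based at least in part on system usage data associated with at least one user."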

Detailed description

Saved in:
Bibliographic details
Main authors: Yahia, Muhammad, Boehm, Kevin, Sauhta, Rohit, Kockerbeck, Mark Conrad, Hughes, Jordan Michael
Format: Patent
Language: eng
Subjects:
Online access: Order full text
creator Yahia, Muhammad
Boehm, Kevin
Sauhta, Rohit
Kockerbeck, Mark Conrad
Hughes, Jordan Michael
description The present disclosure describes techniques for dynamically determining when information is to be output to a user, as well as what information is to be output to a user. A natural language processing system may receive, from a first device, first data representing information to be output at a first point during a skill session. The natural language processing system may also receive, from a second device, second data representing a natural language input. The natural language processing system may determine a skill component is to execute with respect to the natural language input. The natural language processing system may send, to the skill component, second data representing the natural language input. The natural language processing system may receive, from the skill component, an indication that an ongoing first skill session with the second device has reached the first point. After receiving the indication and based at least in part on system usage data associated with at least one user, the natural language processing system may determine third data representing a prompt corresponding to the information and send, to the second device, the third data for output.
format Patent
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US11227592B1
source esp@cenet
subjects ACOUSTICS
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
title Contextual content for voice user interfaces
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-14T18%3A57%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Yahia,%20Muhammad&rft.date=2022-01-18&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11227592B1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true