Time-based context for voice user interface
A media marker mechanism may be used to send, to a cloud service, up-to-date context regarding playback of a media content stream on a user device. The cloud service may insert a media content item into a media content stream and/or combine media content items into a media content stream. The cloud service may implement the media marker mechanism to tag a content item with metadata that can be read by the user device. The user device can play the streaming media content and, when the tagged content item plays, read the metadata and send it to the cloud service. The cloud service can use the metadata to enrich the media content delivery by, for example, providing a companion image that the user device can display while playing the tagged content item, providing a link to make the companion image clickable, handling requests referring to the media content, etc.
Saved in:
Main authors: | Chen, Song; Aginlar, Volkan; Sepasi Ahoei, Arash; Tomlinson, Jack Andrew; Lu, Lei Raymond; Urtnowski, Matthew Brian; Ramaswamy, Arun |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | ACOUSTICS; CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION |
Online access: | Order full text |
creator | Chen, Song; Aginlar, Volkan; Sepasi Ahoei, Arash; Tomlinson, Jack Andrew; Lu, Lei Raymond; Urtnowski, Matthew Brian; Ramaswamy, Arun |
description | A media marker mechanism may be used to send, to a cloud service, up-to-date context regarding playback of a media content stream on a user device. The cloud service may insert a media content item into a media content stream and/or combine media content items into a media content stream. The cloud service may implement the media marker mechanism to tag a content item with metadata that can be read by the user device. The user device can play the streaming media content and, when the tagged content item plays, read the metadata and send it to the cloud service. The cloud service can use the metadata to enrich the media content delivery by, for example, providing a companion image that the user device can display while playing the tagged content item, providing a link to make the companion image clickable, handling requests referring to the media content, etc. |
format | Patent |
creationdate | 2024-05-07 |
link | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20240507&DB=EPODOC&CC=US&NR=11977816B1 |
fulltext | fulltext_linktorsrc |
identifier | US11977816B1 |
language | eng |
recordid | cdi_epo_espacenet_US11977816B1 |
source | esp@cenet |
subjects | ACOUSTICS; CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION |
title | Time-based context for voice user interface |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T17%3A50%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Chen,%20Song&rft.date=2024-05-07&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS11977816B1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |