MACHINE-LEARNING BASED GESTURE RECOGNITION

The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, where the device is an audio output device.
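The pipeline described in the abstract (fuse the outputs of two non-touch sensors, pass them to a trained model that predicts a touch-based gesture, then adjust the audio output level from the prediction) can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature vectors, gesture labels, and the threshold rule standing in for the trained model are all hypothetical placeholders.

```python
def predict_gesture(sensor_a: list, sensor_b: list) -> str:
    """Stand-in for the trained machine learning model: maps the
    concatenated outputs of two non-touch sensors to a gesture label.
    A real system would run model inference here instead of a rule."""
    features = sensor_a + sensor_b  # fuse the two sensor modalities
    energy = sum(abs(x) for x in features) / len(features)
    # Placeholder decision rule in place of learned weights.
    if energy > 0.5:
        return "swipe_up"
    if energy > 0.2:
        return "swipe_down"
    return "none"

def adjust_volume(current: int, gesture: str) -> int:
    """Map the predicted touch-based gesture to a new audio output
    level, clamped to a 0-100 range."""
    delta = {"swipe_up": +10, "swipe_down": -10}.get(gesture, 0)
    return max(0, min(100, current + delta))

# Example: two hypothetical sensor readings raise the volume from 50 to 60.
volume = adjust_volume(50, predict_gesture([0.9, 0.8], [0.7, 0.6]))
```

The key design point the claims describe is that the model consumes both sensor streams jointly, so the gesture prediction can exploit correlations between the two modalities rather than classifying each stream in isolation.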

Bibliographic Details
Main authors: WESTING, Brandt M; AVERY, Keith P; DHANANI, Jamil; MAUDGALYA, Varun; JEONG, Minwoo; RUDCHENKO, Dmytro; KAUR, Harveen; PAEK, Timothy S
Format: Patent
Language: eng; fre; ger
Online access: order full text
Record ID: cdi_epo_espacenet_EP4004693A1
Source: esp@cenet
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; PHYSICS
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T22%3A15%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=WESTING,%20Brandt%20M&rft.date=2022-06-01&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EEP4004693A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true