Translating Combinations of User Gaze Direction and Predetermined Facial Gestures into User Input Instructions for Near-Eye-Display (NED) Devices

A Near-Eye-Display (NED) device that translates combinations of user gaze direction and predetermined facial gestures into user input instructions. The NED device includes an eye tracking system and a display that renders computer-generated images within a user's field-of-view. The eye tracking system may continually track the user's eye movements with a high degree of accuracy to identify specific computer-generated images that a user is focused on. The eye tracking system may also identify various facial gestures, such as left-eye blinks and/or right-eye blinks, that are performed while the specific computer-generated images are being focused on. In this way, NED devices are enabled to identify combinations of user gaze direction and predetermined facial gestures and to translate these identified combinations into user input instructions that correspond to specific computer-generated images.
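The pipeline the abstract describes — resolve the gaze direction to a rendered image, detect a predetermined facial gesture performed while that image is focused on, and emit the input instruction bound to that combination — can be sketched as below. This is only an illustrative sketch of the idea, not the patent's implementation; every name here (`GazeSample`, `BINDINGS`, `translate`, the gesture strings) is a hypothetical assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    """One eye-tracker observation (hypothetical structure)."""
    target_id: Optional[str]   # computer-generated image the gaze resolves to
    gesture: Optional[str]     # e.g. "left_blink", "right_blink", or None

# Hypothetical bindings: (gazed-at image, facial gesture) -> input instruction.
BINDINGS = {
    ("menu_button", "left_blink"): "open_menu",
    ("menu_button", "right_blink"): "dismiss_menu",
    ("photo_tile", "left_blink"): "select_photo",
}

def translate(sample: GazeSample) -> Optional[str]:
    """Translate a combination of gaze target and facial gesture into a
    user input instruction, or None if no binding matches."""
    if sample.target_id is None or sample.gesture is None:
        return None
    return BINDINGS.get((sample.target_id, sample.gesture))
```

For example, a left-eye blink while the gaze rests on the menu button would map to `translate(GazeSample("menu_button", "left_blink"))`, i.e. the "open_menu" instruction, while a gaze with no detected gesture maps to no instruction at all.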

Detailed Description

Saved in:
Bibliographic Details
Main Authors: PACE, Maria Esther, ORTIZ EGEA, Sergio
Format: Patent
Language: eng
Subjects:
Online Access: Order full text
Patent Number: US2021041949A1
Publication Date: 2021-02-11
Record ID: cdi_epo_espacenet_US2021041949A1
Source: esp@cenet
Subjects:
CALCULATING
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
OPTICS
PHYSICS