TOUCH AND SOCIAL CUES AS INPUTS INTO A COMPUTER

A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database.

Detailed description

Saved in:
Bibliographic details
Main authors: MAITLEN CRAIG R, LATTA STEPHEN, ANDREWS ANTON O.A, MARTIN SHERIDAN, NOVAK CHRISTOPHER MICHAEL, LIU JAMES
Format: Patent
Language: eng
Subject headings:
Online access: order full text
creator MAITLEN CRAIG R
LATTA STEPHEN
ANDREWS ANTON O.A
MARTIN SHERIDAN
NOVAK CHRISTOPHER MICHAEL
LIU JAMES
description A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).
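The description above outlines a rule-based inference step: observed cues (distance to a person, type of environment, physical interaction) are matched against a social rules database, and the inferred relationship selects which virtual objects the HMD overlays. A minimal sketch of that idea follows; the rule thresholds, field names, and object lists are illustrative assumptions, not taken from the patent itself.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of one observed interaction. All names and
# thresholds here are illustrative, not from the patent text.
@dataclass
class Interaction:
    distance_m: float          # distance to the other person, in meters
    environment: str           # e.g. "home" or "work"
    physical: Optional[str]    # e.g. "handshake", "hug", or None

def infer_relationship(obs: Interaction) -> str:
    """Apply simple hand-written social rules to infer a relationship."""
    if obs.physical == "hug" and obs.environment == "home":
        return "friend"
    if obs.physical == "handshake" and obs.environment == "work":
        return "acquaintance"
    if obs.distance_m < 1.0:
        return "friend"
    return "stranger"

def virtual_objects_for(relationship: str) -> list:
    """Choose which virtual objects the HMD would overlay."""
    return {
        "friend": ["shared_photos", "status_message"],
        "acquaintance": ["name_tag", "job_title"],
    }.get(relationship, [])

# A handshake at work is mapped to "acquaintance", which in turn
# selects the acquaintance-level overlay objects.
obs = Interaction(distance_m=1.5, environment="work", physical="handshake")
print(virtual_objects_for(infer_relationship(obs)))  # → ['name_tag', 'job_title']
```

The lookup is deliberately a flat table of rules: the patent abstract describes the rules as stored in a database and consulted per interaction, so a real system would presumably load them from storage rather than hard-code them as above.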
format Patent
creationdate 2013-07-04
language eng
recordid cdi_epo_espacenet_US2013169682A1
source esp@cenet
subjects CALCULATING
COMPUTING
COUNTING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
title TOUCH AND SOCIAL CUES AS INPUTS INTO A COMPUTER
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T09%3A56%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=MAITLEN%20CRAIG%20R&rft.date=2013-07-04&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2013169682A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true