Focus and context in mixed reality by modulating first order salient features

We present a technique for dynamically directing a viewer's attention to a focus object by analyzing and modulating bottom-up salient features of a video feed. Rather than applying a static modulation strategy, we inspect the original image's saliency map, and modify the image automatically to favor the focus object. Image fragments are adaptively darkened, lightened and manipulated in hue according to local contrast information rather than global parameters. The goal is to suggest rather than force the attention of the user towards a specific location. The technique's goal is to apply only minimal changes to an image, while achieving a desired difference of saliency between focus and context regions of the image. Our technique exhibits temporal and spatial coherence and runs at interactive frame rates using GPU shaders. We present several application examples from the field of Mixed Reality, or more precisely Mediated Reality.
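
The following is a minimal, CPU-side sketch of the idea described in the abstract. It assumes a spectral-residual saliency map (Hou & Zhang) as a stand-in for the paper's bottom-up saliency model, and a simple HSV lightness/saturation attenuation in place of the authors' per-fragment GPU shaders; the function names, the `focus_mask` input, and the `strength` parameter are illustrative choices of ours, not the paper's API.

```python
# Sketch of saliency-guided focus+context modulation (not the authors' implementation).
# Assumptions: spectral-residual saliency approximates the bottom-up saliency map;
# context suppression is a per-pixel lightness/saturation attenuation driven by how
# much each context pixel exceeds the focus region's mean saliency.
import cv2
import numpy as np

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Approximate bottom-up saliency of a grayscale frame, normalized to [0, 1]."""
    small = cv2.resize(gray, (64, 64), interpolation=cv2.INTER_AREA).astype(np.float32)
    spectrum = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    residual = log_amp - cv2.blur(log_amp, (3, 3))          # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)

def modulate_frame(frame_bgr: np.ndarray, focus_mask: np.ndarray,
                   strength: float = 0.6) -> np.ndarray:
    """Suppress context regions only where they out-compete the focus region.

    focus_mask: uint8 mask of the same height/width as the frame,
                nonzero inside the focus object.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sal = spectral_residual_saliency(gray)
    focus = focus_mask.astype(bool)
    focus_level = float(sal[focus].mean()) if focus.any() else 0.0

    # How much each pixel exceeds the focus region's mean saliency; zero inside focus.
    excess = np.clip(sal - focus_level, 0.0, 1.0)
    excess[focus] = 0.0

    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    # Darken (V) and desaturate (S) proportionally to the local excess, so
    # low-conflict context pixels are barely touched (local, not global, parameters).
    attenuation = 1.0 - strength * excess
    hsv[..., 2] *= attenuation
    hsv[..., 1] *= attenuation
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
```

In a live setting this would run once per video frame; the paper additionally reports temporal and spatial coherence at interactive frame rates on the GPU, which a sketch like this would have to add (for example by smoothing the attenuation map over successive frames).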

Detailed description

Bibliographic details
Main authors: Mendez, Erick, Feiner, Steven, Schmalstieg, Dieter
Format: Conference proceeding
Language: English
Subjects:
Online access: Full text
container_end_page 243
container_start_page 232
container_title Proceedings of the 10th international conference on Smart graphics
creator Mendez, Erick
Feiner, Steven
Schmalstieg, Dieter
description We present a technique for dynamically directing a viewer's attention to a focus object by analyzing and modulating bottom-up salient features of a video feed. Rather than applying a static modulation strategy, we inspect the original image's saliency map, and modify the image automatically to favor the focus object. Image fragments are adaptively darkened, lightened and manipulated in hue according to local contrast information rather than global parameters. The goal is to suggest rather than force the attention of the user towards a specific location. The technique's goal is to apply only minimal changes to an image, while achieving a desired difference of saliency between focus and context regions of the image. Our technique exhibits temporal and spatial coherence and runs at interactive frame rates using GPU shaders. We present several application examples from the field of Mixed Reality, or more precisely Mediated Reality.
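The trade-off stated in the description (minimal image change while reaching a desired focus/context saliency gap) can be summarized, in our own notation rather than the authors', as a constrained objective:

\[
\min_{I'} \; \lVert I' - I \rVert
\quad \text{s.t.} \quad
\bar{S}_{\text{focus}}(I') - \bar{S}_{\text{context}}(I') \ge \Delta,
\]

where \(I\) is the input frame, \(I'\) the modulated frame, \(\bar{S}\) the mean bottom-up saliency over a region, and \(\Delta\) the desired saliency difference.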
doi_str_mv 10.5555/1894345.1894374
format Conference Proceeding
fulltext fulltext
identifier ISBN: 3642135439
ispartof Proceedings of the 10th international conference on Smart graphics, 2010, p.232-243
language eng
recordid cdi_acm_books_10_5555_1894345_1894374_brief
source Springer Books
subjects Computing methodologies
Computing methodologies -- Artificial intelligence
Computing methodologies -- Artificial intelligence -- Computer vision
Computing methodologies -- Artificial intelligence -- Computer vision -- Computer vision tasks
Computing methodologies -- Artificial intelligence -- Computer vision -- Computer vision tasks -- Scene understanding
title Focus and context in mixed reality by modulating first order salient features
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T13%3A01%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-acm&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Focus%20and%20context%20in%20mixed%20reality%20by%20modulating%20first%20order%20salient%20features&rft.btitle=Proceedings%20of%20the%2010th%20international%20conference%20on%20Smart%20graphics&rft.au=Mendez,%20Erick&rft.date=2010-06-24&rft.spage=232&rft.epage=243&rft.pages=232-243&rft.isbn=3642135439&rft.isbn_list=9783642135439&rft_id=info:doi/10.5555/1894345.1894374&rft_dat=%3Cacm%3Eacm_books_10_5555_1894345_1894374%3C/acm%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true