Ultrawide Foveated Video Extrapolation
Published in: | IEEE journal of selected topics in signal processing, 2011-04, Vol.5 (2), p.321-334 |
Main authors: | Avraham, Tamar; Schechner, Yoav Y |
Format: | Article |
Language: | eng |
container_end_page | 334 |
container_issue | 2 |
container_start_page | 321 |
container_title | IEEE journal of selected topics in signal processing |
container_volume | 5 |
creator | Avraham, Tamar; Schechner, Yoav Y |
description | Consider the task of creating a very wide visual extrapolation, i.e., a synthetic continuation of the field of view much beyond the acquired data. Existing related methods deal mainly with filling in holes in images and video. These methods are very time consuming and often prone to noticeable artifacts. The probability for artifacts grows as the synthesized regions become more distant from the domain of the raw video. Therefore, such methods do not lend themselves easily to very large extrapolations. We suggest an approach to enable this task. First, an improved completion algorithm that rejects peripheral distractions significantly reduces attention-drawing artifacts. Second, a foveated video extrapolation approach exploits weaknesses of the human visual system, in order to enable efficient extrapolation of video, while further reducing attention-drawing artifacts. Consider a screen showing the raw video. Let the region beyond the raw video domain reside outside the field corresponding to the viewer's fovea. Then, the farther the extrapolated synthetic region is from the raw field of view, the more the spatial resolution can be reduced. This enables image synthesis using spatial blocks that become gradually coarser and significantly fewer (per unit area), as the extrapolated region expands. The substantial reduction in the number of synthesized blocks notably speeds the process and increases the probability of success without distracting artifacts. Furthermore, results supporting the foveated approach are obtained by a user study. |
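The abstract describes synthesizing the periphery with blocks that grow coarser, and therefore sparser per unit area, with distance from the raw field of view. The sketch below is an illustrative reconstruction of that idea only; the function names, the linear growth model, and all parameter values (`base`, `growth`, `max_size`) are assumptions, not the paper's actual schedule.

```python
# Sketch of foveated block sizing: synthesis blocks become coarser with
# eccentricity (distance in pixels from the raw video border), so far
# fewer blocks are needed per unit area in the periphery.
# Linear growth model and parameters are illustrative assumptions.

def block_size_at(eccentricity_px, base=8, growth=0.05, max_size=64):
    """Block edge length (px) at a given distance from the raw video border."""
    return min(max_size, int(base + growth * eccentricity_px))

def blocks_per_area(eccentricity_px):
    """Blocks required per 10,000 px^2 at this eccentricity."""
    s = block_size_at(eccentricity_px)
    return 10_000 / (s * s)

if __name__ == "__main__":
    for ecc in (0, 200, 600, 1000):
        s = block_size_at(ecc)
        print(f"eccentricity {ecc:4d} px -> {s:2d}x{s:2d} blocks, "
              f"{blocks_per_area(ecc):6.1f} blocks per 10k px^2")
```

Under this toy schedule the per-area block count drops by more than an order of magnitude between the fovea and the far periphery, which is the mechanism the abstract credits for the speedup.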
doi_str_mv | 10.1109/JSTSP.2010.2065213 |
format | Article |
coden | IJSTGY |
publisher | New York: IEEE |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1932-4553 |
ispartof | IEEE journal of selected topics in signal processing, 2011-04, Vol.5 (2), p.321-334 |
issn | 1932-4553 1941-0484 |
language | eng |
recordid | cdi_proquest_journals_857273273 |
source | IEEE Electronic Library (IEL) |
subjects | Computer displays; Display technology; Extrapolation; Field of view; Filling; Fovea; foveated vision; Humans; image and video completion; Image generation; Lighting; Raw; Spatial resolution; Tasks; Video compression; video extrapolation; Visual; Visual system |
title | Ultrawide Foveated Video Extrapolation |