Spatio-Temporal Outdoor Lighting Aggregation on Image Sequences Using Transformer Networks
Saved in:
Published in: | International journal of computer vision, 2023-04, Vol.131 (4), p.1060-1072 |
---|---|
Main authors: | Lee, Haebom; Homeyer, Christian; Herzog, Robert; Rexilius, Jan; Rother, Carsten |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 1072 |
---|---|
container_issue | 4 |
container_start_page | 1060 |
container_title | International journal of computer vision |
container_volume | 131 |
creator | Lee, Haebom; Homeyer, Christian; Herzog, Robert; Rexilius, Jan; Rother, Carsten |
description | In this work, we focus on outdoor lighting estimation by aggregating individual noisy estimates from images, exploiting the rich image information from wide-angle cameras and/or temporal image sequences. Photographs inherently encode information about the lighting of the scene in the form of shading and shadows. Recovering the lighting is an inverse rendering problem and, as such, ill-posed. Recent research based on deep neural networks has shown promising results for estimating light from a single image, but with shortcomings in robustness. We tackle this problem by combining lighting estimates from several image views sampled in the angular and temporal domains of an image sequence. For this task, we introduce a transformer architecture that is trained in an end-to-end fashion without any statistical post-processing as required by previous work. To this end, we propose a positional encoding that takes into account camera alignment and ego-motion estimation to globally register the individual estimates when computing attention between visual words. We show that our method leads to improved lighting estimation while requiring fewer hyperparameters compared to the state of the art. |
doi_str_mv | 10.1007/s11263-022-01725-2 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0920-5691 |
ispartof | International journal of computer vision, 2023-04, Vol.131 (4), p.1060-1072 |
issn | 0920-5691 (print); 1573-1405 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2781943718 |
source | SpringerLink Journals - AutoHoldings |
subjects | Artificial Intelligence; Artificial neural networks; Cameras; Computer Imaging; Computer Science; Estimates; Estimation; Image Processing and Computer Vision; Image sequencing; Lighting; Motion simulation; Neural networks; Pattern Recognition; Pattern Recognition and Graphics; Special Issue on Pattern Recognition (DAGM GCPR 2021); Vision |
title | Spatio-Temporal Outdoor Lighting Aggregation on Image Sequences Using Transformer Networks |
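The abstract describes aggregating noisy per-view lighting estimates with a transformer whose positional encoding incorporates camera alignment and ego-motion, so that estimates from different views are globally registered before attention is computed. The following is a minimal, hypothetical PyTorch sketch of that idea; the module name, feature dimensions, the flattened-extrinsics pose encoding, and the sun-direction-plus-intensity parameterization are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: a pose-aware transformer that fuses noisy
# per-view lighting estimates into one global estimate. All names and
# dimensions are assumptions for illustration.
import torch
import torch.nn as nn


class PoseAwareLightingAggregator(nn.Module):
    def __init__(self, feat_dim=64, n_heads=4, n_layers=2, light_dim=4):
        super().__init__()
        # Embed each per-view lighting estimate (e.g. sun direction + intensity).
        self.light_embed = nn.Linear(light_dim, feat_dim)
        # Positional encoding derived from camera pose (rotation + translation),
        # standing in for the camera-alignment / ego-motion registration.
        self.pose_embed = nn.Linear(12, feat_dim)  # flattened 3x4 extrinsics
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, light_dim)  # fused global estimate

    def forward(self, per_view_lighting, camera_poses):
        # per_view_lighting: (B, N, light_dim) noisy estimates from N views
        # camera_poses:      (B, N, 12) flattened world-to-camera extrinsics
        tokens = self.light_embed(per_view_lighting) + self.pose_embed(camera_poses)
        fused = self.encoder(tokens)          # attention across views / time steps
        return self.head(fused.mean(dim=1))   # single aggregated lighting estimate


# Usage: aggregate 8 noisy estimates sampled from an image sequence.
model = PoseAwareLightingAggregator()
estimates = torch.randn(2, 8, 4)    # e.g. sun direction (3) + intensity (1)
poses = torch.randn(2, 8, 12)
global_light = model(estimates, poses)  # shape (2, 4)
```

The design point illustrated here is simply that pose information enters as an additive positional encoding on each view token, so the self-attention can weight and register estimates across the angular and temporal domains before a single fused output is read off.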