ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition

Bibliographic details

Published in: International Journal of Computer Vision, 2021-08, Vol. 129 (8), p. 2445-2473
Authors: Baslamisli, Anil S.; Das, Partha; Le, Hoang-An; Karaoglu, Sezer; Gevers, Theo
Format: Article
Language: English
Online access: Full text
Description: In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail in distinguishing strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
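The fine-grained model sketched in the abstract treats the observed image as reflectance (albedo) times shading, with shading further split into a direct subcomponent (illumination) and an indirect one (ambient light and shadows). A minimal NumPy sketch of that formation model, assuming a simple additive split I = R * (S_direct + S_indirect); the variable names and the exact formulation are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 4, 4

# Reflectance (albedo): per-pixel RGB material color.
albedo = rng.uniform(0.2, 0.9, size=(h, w, 3))
# Direct shading: illumination reaching the surface directly.
direct = rng.uniform(0.0, 1.0, size=(h, w, 1))
# Indirect shading: ambient light and shadow effects.
indirect = rng.uniform(0.0, 0.3, size=(h, w, 1))

# Unified shading, as used by conventional intrinsic decomposition methods.
shading = direct + indirect

# Image formation: I = R * S, shading broadcast over the RGB channels.
image = albedo * shading

# A fine-grained decomposition is consistent if its subcomponents
# reproduce the same image as the unified shading.
assert np.allclose(image, albedo * (direct + indirect))
```

The point of the finer split is that a strong cast shadow lands in `indirect` rather than being confused with an albedo edge, which is the failure mode the abstract attributes to unified-shading methods.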
DOI: 10.1007/s11263-021-01477-5
Publisher: Springer US (New York)
Rights: © The Author(s) 2021. Published open access under the Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/).
ISSN: 0920-5691
EISSN: 1573-1405
Source: SpringerLink Journals - AutoHoldings
Subjects: Algorithms; Artificial Intelligence; Artificial neural networks; Computer Imaging; Computer Science; Datasets; Decomposition; Image Processing and Computer Vision; Neural networks; Pattern Recognition; Pattern Recognition and Graphics; Photometry; Reflectance; Shading; Vision