Modeling surface appearance from a single photograph using self-augmented convolutional neural networks
We present a convolutional neural network (CNN) based solution for modeling physically plausible spatially varying surface reflectance functions (SVBRDF) from a single photograph of a planar material sample under unknown natural illumination. Gathering a sufficiently large set of labeled training pairs, consisting of photographs of SVBRDF samples and corresponding reflectance parameters, is a difficult and arduous process. To reduce the amount of required labeled training data, we propose to leverage the appearance information embedded in unlabeled images of spatially varying materials to self-augment the training process. Starting from an initial approximative network obtained from a small set of labeled training pairs, we estimate provisional model parameters for each unlabeled training exemplar. Given this provisional reflectance estimate, we then synthesize a novel temporary labeled training pair by rendering the exact corresponding image under a new lighting condition. After refining the network using these additional training samples, we re-estimate the provisional model parameters for the unlabeled data and repeat the self-augmentation process until convergence. We demonstrate the efficacy of the proposed network structure on spatially varying wood, metals, and plastics, as well as thoroughly validate the effectiveness of the self-augmentation training process.
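The abstract walks through an iterative training procedure: train an initial network on a small labeled set, estimate provisional SVBRDFs for unlabeled photos, re-render those estimates under new lighting to obtain exactly labeled pairs, refine, and repeat. Below is a minimal, framework-agnostic Python sketch of that loop as described in the abstract; the callables `train`, `predict`, `render`, and `converged`, as well as the `Photo`/`SVBRDF` aliases, are hypothetical placeholders for illustration, not the authors' implementation or API.

```python
# Minimal sketch of the self-augmentation loop summarized in the abstract above.
# The CNN, renderer, and convergence test are passed in as plain callables;
# all names and signatures here are hypothetical stand-ins, not the authors' code.

from typing import Callable, List, Tuple

Photo = object    # a photograph of a planar material sample
SVBRDF = object   # spatially varying reflectance parameters


def self_augmented_training(
    labeled_pairs: List[Tuple[Photo, SVBRDF]],
    unlabeled_photos: List[Photo],
    train: Callable[[List[Tuple[Photo, SVBRDF]]], None],  # fit/refine the CNN on pairs
    predict: Callable[[Photo], SVBRDF],                    # CNN inference: photo -> provisional SVBRDF
    render: Callable[[SVBRDF], Photo],                     # render an SVBRDF under a new lighting condition
    converged: Callable[[int], bool],                      # stopping criterion
) -> None:
    """Alternate between provisional SVBRDF estimation on unlabeled photos and
    retraining on the re-rendered, synthetically labeled pairs."""
    # 1. Initial approximative network from the small labeled set.
    train(labeled_pairs)

    iteration = 0
    while not converged(iteration):
        # 2. Provisional reflectance estimates for every unlabeled exemplar.
        provisional = [predict(photo) for photo in unlabeled_photos]

        # 3. Each provisional SVBRDF is re-rendered under a new lighting
        #    condition, yielding a temporary pair whose label is exact
        #    for the rendered image by construction.
        augmented = [(render(svbrdf), svbrdf) for svbrdf in provisional]

        # 4. Refine the network on the original plus self-augmented data, then repeat.
        train(labeled_pairs + augmented)
        iteration += 1
```

Passing the network and renderer in as callables keeps the sketch independent of any particular deep-learning framework or rendering pipeline.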
Saved in:
Published in: | ACM transactions on graphics, 2017-08, Vol.36 (4), p.1-11 |
Main authors: | Li, Xiao; Dong, Yue; Peers, Pieter; Tong, Xin |
Format: | Article |
Language: | eng |
Online access: | Full text |
container_end_page | 11 |
container_issue | 4 |
container_start_page | 1 |
container_title | ACM transactions on graphics |
container_volume | 36 |
creator | Li, Xiao; Dong, Yue; Peers, Pieter; Tong, Xin |
description | We present a convolutional neural network (CNN) based solution for modeling physically plausible spatially varying surface reflectance functions (SVBRDF) from a single photograph of a planar material sample under unknown natural illumination. Gathering a sufficiently large set of labeled training pairs consisting of photographs of SVBRDF samples and corresponding reflectance parameters, is a difficult and arduous process. To reduce the amount of required labeled training data, we propose to leverage the appearance information embedded in unlabeled images of spatially varying materials to self-augment the training process. Starting from an initial approximative network obtained from a small set of labeled training pairs, we estimate provisional model parameters for each unlabeled training exemplar. Given this provisional reflectance estimate, we then synthesize a novel temporary labeled training pair by rendering the exact corresponding image under a new lighting condition. After refining the network using these additional training samples, we re-estimate the provisional model parameters for the unlabeled data and repeat the self-augmentation process until convergence. We demonstrate the efficacy of the proposed network structure on spatially varying wood, metals, and plastics, as well as thoroughly validate the effectiveness of the self-augmentation training process. |
doi_str_mv | 10.1145/3072959.3073641 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0730-0301 |
ispartof | ACM transactions on graphics, 2017-08, Vol.36 (4), p.1-11 |
issn | 0730-0301 1557-7368 |
language | eng |
recordid | cdi_crossref_primary_10_1145_3072959_3073641 |
source | ACM Digital Library |
title | Modeling surface appearance from a single photograph using self-augmented convolutional neural networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T09%3A21%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Modeling%20surface%20appearance%20from%20a%20single%20photograph%20using%20self-augmented%20convolutional%20neural%20networks&rft.jtitle=ACM%20transactions%20on%20graphics&rft.au=Li,%20Xiao&rft.date=2017-08-31&rft.volume=36&rft.issue=4&rft.spage=1&rft.epage=11&rft.pages=1-11&rft.issn=0730-0301&rft.eissn=1557-7368&rft_id=info:doi/10.1145/3072959.3073641&rft_dat=%3Ccrossref%3E10_1145_3072959_3073641%3C/crossref%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |