Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation

Bibliographic Details
Main Authors: Tajima, Daichi; Kanamori, Yoshihiro; Endo, Yuki
Format: Article
Language: English
Subjects: Computer Science - Graphics
Published: 2021-10-14
Online Access: https://arxiv.org/abs/2110.07272
creator Tajima, Daichi ; Kanamori, Yoshihiro ; Endo, Yuki
description The modern supervised approaches for human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data with a small number of individuals) or limited to diffuse materials (e.g., commercial 3D scanned human models). Thus, human relighting techniques suffer from poor generalization capability and a synthetic-to-real domain gap. In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second stage, we train another network to enhance non-diffuse reflection by learning residuals between real photos and images reconstructed by the diffuse-only network. Thanks to the second stage, we achieve higher generalization capability across various cloth textures while reducing the domain gap. Furthermore, to handle input videos, we integrate an illumination-aware deep video prior that greatly reduces flickering artifacts even in challenging settings under dynamic illumination.
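The description above outlines a two-stage pipeline: a diffuse-only relighting network whose output is refined by a residual network that restores non-diffuse reflection. The sketch below is a minimal, hypothetical PyTorch illustration of how such a composition could look at inference time; the module names, tensor shapes, and the use of a 27-dimensional spherical-harmonics lighting code are assumptions for illustration, not the authors' actual architecture, and training losses and the video prior are omitted.

```python
# Hypothetical sketch of a two-stage relighting pipeline (not the authors'
# implementation). Stage 1 predicts a diffuse-only relit image from the input
# photo and a target lighting code; stage 2 predicts a residual that adds back
# non-diffuse effects (specularities, cloth sheen).
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class DiffuseRelightNet(nn.Module):
    """Stage 1: input photo + target lighting code -> diffuse-only relit image."""

    def __init__(self, light_dim: int = 27):  # 9 SH coefficients x 3 colors (assumed)
        super().__init__()
        self.encoder = ConvBlock(3 + light_dim, 64)
        self.decoder = nn.Conv2d(64, 3, 1)

    def forward(self, image, light):
        # Broadcast the lighting code over the spatial dimensions and concatenate.
        b, _, h, w = image.shape
        light_map = light.view(b, -1, 1, 1).expand(b, light.shape[1], h, w)
        feat = self.encoder(torch.cat([image, light_map], dim=1))
        return torch.sigmoid(self.decoder(feat))


class ResidualRelightNet(nn.Module):
    """Stage 2: input photo + stage-1 output -> non-diffuse residual image."""

    def __init__(self):
        super().__init__()
        self.encoder = ConvBlock(6, 64)
        self.decoder = nn.Conv2d(64, 3, 1)

    def forward(self, image, diffuse_relit):
        feat = self.encoder(torch.cat([image, diffuse_relit], dim=1))
        return torch.tanh(self.decoder(feat))  # residual in [-1, 1]


def relight(image, target_light, diffuse_net, residual_net):
    """Compose the two stages: diffuse prediction plus learned residual."""
    diffuse = diffuse_net(image, target_light)
    residual = residual_net(image, diffuse)
    return (diffuse + residual).clamp(0.0, 1.0)


if __name__ == "__main__":
    diffuse_net, residual_net = DiffuseRelightNet(), ResidualRelightNet()
    img = torch.rand(1, 3, 256, 256)   # dummy input photo
    light = torch.rand(1, 27)          # dummy target lighting code
    out = relight(img, light, diffuse_net, residual_net)
    print(out.shape)                   # torch.Size([1, 3, 256, 256])
```

In the second training stage, as the description frames it, only the residual branch would be fit against real photos (via residuals between real images and diffuse-only reconstructions), which is what narrows the synthetic-to-real gap; the diffuse stage can remain trained purely on synthetic 3D-model renderings.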
doi_str_mv 10.48550/arxiv.2110.07272
format Article
identifier DOI: 10.48550/arxiv.2110.07272
language eng
recordid cdi_arxiv_primary_2110_07272
source arXiv.org
subjects Computer Science - Graphics
title Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation