DiffusionLight: Light Probes for Free by Painting a Chrome Ball
Main authors: | , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We present a simple yet effective technique to estimate lighting in a single input image. Current techniques rely heavily on HDR panorama datasets to train neural networks to regress an input with a limited field of view to a full environment map. However, these approaches often struggle in real-world, uncontrolled settings due to the limited diversity and size of their datasets. To address this problem, we leverage diffusion models trained on billions of standard images to render a chrome ball into the input image. Despite its simplicity, this task remains challenging: the diffusion models often insert incorrect or inconsistent objects and cannot readily generate images in HDR format. Our research uncovers a surprising relationship between the appearance of chrome balls and the initial diffusion noise map, which we utilize to consistently generate high-quality chrome balls. We further fine-tune an LDR diffusion model (Stable Diffusion XL) with LoRA, enabling it to perform exposure bracketing for HDR light estimation. Our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios. |
DOI: | 10.48550/arxiv.2312.09168 |
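
The abstract's core idea, inpainting a mirrored chrome ball into the photo and reading the scene's lighting off its reflection, can be prototyped with an off-the-shelf SDXL inpainting pipeline. The snippet below is a minimal sketch assuming the Hugging Face diffusers library and its public SDXL inpainting checkpoint; the prompt, mask geometry, and sampler settings are illustrative guesses, not the paper's exact configuration, and it omits the paper's constrained initial noise map and LoRA-based exposure bracketing.

```python
# Hypothetical sketch: inpaint a mirrored chrome ball into a photo with an
# off-the-shelf SDXL inpainting pipeline (Hugging Face diffusers). The paper's
# method additionally controls the initial diffusion noise map and fine-tunes
# the model with LoRA for exposure bracketing; none of that is reproduced here.
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # public SDXL inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load the input photo and build a circular mask where the ball should appear.
image = Image.open("scene.jpg").convert("RGB").resize((1024, 1024))
mask = Image.new("L", image.size, 0)
cx, cy, r = 512, 512, 160  # illustrative placement: a centered ball
ImageDraw.Draw(mask).ellipse((cx - r, cy - r, cx + r, cy + r), fill=255)

result = pipe(
    prompt="a perfect mirrored reflective chrome ball sphere",
    negative_prompt="matte, diffuse, flat, dark",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
    guidance_scale=5.0,
).images[0]
result.save("scene_with_chrome_ball.png")

# The inpainted ball reflects the surrounding scene; unwrapping the mirrored
# sphere (and, in the paper, merging several predicted exposures into HDR)
# yields an environment map usable as a light probe.
```

This sketch covers only the single-LDR inpainting step; in the paper, consistency across runs comes from the relationship between the ball's appearance and the initial noise map, and HDR output comes from merging the LoRA model's bracketed exposures.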