Hybrid Loss for Learning Single-Image-based HDR Reconstruction
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: This paper tackles high-dynamic-range (HDR) image reconstruction from a single low-dynamic-range (LDR) image. Whereas existing methods focus on minimizing the mean-squared error (MSE) between the target and reconstructed images, we minimize a hybrid loss consisting of perceptual and adversarial losses in addition to an HDR-reconstruction loss. The reconstruction loss is better suited to HDR than MSE because it places more weight on both over- and under-exposed areas, which keeps the reconstruction faithful to the input. The perceptual loss lets the networks exploit knowledge of objects and image structure to recover the intensity gradients of saturated and grossly quantized areas. The adversarial loss helps select the most plausible appearance from among multiple solutions. The hybrid loss combining all three terms is computed in the logarithmic space of image intensity, so that the outputs retain a large dynamic range while learning remains tractable. Comparative experiments with other state-of-the-art methods demonstrated that our method produces a marked leap in image quality.
DOI: 10.48550/arxiv.1812.07134
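The hybrid loss described in the summary could be sketched roughly as below. This is a minimal NumPy illustration, not the authors' implementation: the exposure thresholds, the term weights `w_perc` and `w_adv`, and the assumption that the perceptual and adversarial terms arrive as precomputed scalars are all hypothetical.

```python
import numpy as np

def hdr_reconstruction_loss(pred_log, target_log,
                            lo=np.log(0.05), hi=np.log(0.95)):
    # Squared error in log-intensity space, with extra weight on pixels
    # whose target intensity sits near the under- or over-exposure limits.
    # The 5% / 95% thresholds are hypothetical, not from the paper.
    weight = 1.0 + (target_log < lo) + (target_log > hi)
    return np.mean(weight * (pred_log - target_log) ** 2)

def hybrid_loss(pred_log, target_log, perceptual_term, adversarial_term,
                w_perc=0.1, w_adv=0.01):
    # Weighted sum of the three terms; in a full system the perceptual
    # term would come from a feature network and the adversarial term
    # from a discriminator. The weights here are illustrative only.
    return (hdr_reconstruction_loss(pred_log, target_log)
            + w_perc * perceptual_term
            + w_adv * adversarial_term)
```

Working in log space keeps the large HDR intensity range numerically manageable, while the weight mask mimics the summary's emphasis on over- and under-exposed regions.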