Depth of Field Aware Differentiable Rendering
Published in: ACM Transactions on Graphics, 2022-12, Vol. 41 (6), p. 1-18, Article 190
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Full Text
Abstract: Cameras with a finite aperture diameter exhibit defocus for scene elements that are not at the focus distance, and have only a limited depth of field within which objects appear acceptably sharp. In this work we address the problem of applying inverse rendering techniques to input data that exhibits such defocus blurring. We present differentiable depth-of-field rendering techniques that are applicable to both rasterization-based methods using mesh representations, as well as ray-marching-based methods using either explicit [Yu et al. 2021] or implicit volumetric radiance fields [Mildenhall et al. 2020]. Our approach learns significantly sharper scene reconstructions on data containing blur due to depth of field, and recovers aperture and focus distance parameters that result in plausible forward-rendered images. We show applications to macro photography, where typical lens configurations result in a very narrow depth of field, and to multi-camera video capture, where maintaining sharp focus across a large capture volume for a moving subject is difficult.
ISSN: 0730-0301, 1557-7368
DOI: 10.1145/3550454.3555521
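The abstract describes making depth-of-field rendering differentiable so that aperture and focus distance can be recovered alongside the scene. As an illustration of the general idea for the ray-marching case, the sketch below jitters pinhole camera rays over a thin-lens aperture so that they converge on the plane of focus. It is a minimal sketch under the standard thin-lens model, not the paper's implementation; the function and parameter names (sample_dof_rays, aperture_radius, focus_distance) are assumptions chosen for clarity.

```python
# Hedged sketch: thin-lens depth-of-field ray sampling for a ray-marching
# renderer. Not the paper's code; names and conventions are illustrative.
import jax
import jax.numpy as jnp

def sample_dof_rays(key, pinhole_origins, pinhole_dirs,
                    aperture_radius, focus_distance):
    """Jitter pinhole camera rays over a circular lens aperture.

    pinhole_origins, pinhole_dirs: (N, 3) camera-space ray origins/directions,
        with the camera looking down -z.
    aperture_radius, focus_distance: scalars; all operations below are
        jax.numpy primitives, so gradients with respect to both flow through.
    """
    n = pinhole_origins.shape[0]

    # Uniformly sample points on a disk of radius `aperture_radius`
    # (reparameterized so the aperture gradient is well defined).
    k1, k2 = jax.random.split(key)
    r = aperture_radius * jnp.sqrt(jax.random.uniform(k1, (n,)))
    theta = 2.0 * jnp.pi * jax.random.uniform(k2, (n,))
    lens_offset = jnp.stack(
        [r * jnp.cos(theta), r * jnp.sin(theta), jnp.zeros(n)], axis=-1)

    # Every ray through the lens converges on the plane of focus, so the
    # jittered ray must pass through the point the central ray hits there.
    t_focus = focus_distance / (-pinhole_dirs[:, 2:3])
    focal_points = pinhole_origins + t_focus * pinhole_dirs

    origins = pinhole_origins + lens_offset
    dirs = focal_points - origins
    dirs = dirs / jnp.linalg.norm(dirs, axis=-1, keepdims=True)
    return origins, dirs
```

Averaging renders over many such lens samples produces defocus blur that vanishes for geometry at the focus distance and grows with the aperture; because the sampling is differentiable in aperture_radius and focus_distance (e.g. via jax.grad), camera parameters of this kind can in principle be optimized from blurred observations, which is the spirit of what the abstract describes.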