A quantized-diffusion model for rendering translucent materials

Bibliographic Details
Published in: ACM SIGGRAPH 2011 papers, 2011-07, Vol. 30 (4), p. 1-14
Main authors: D'Eon, Eugene; Irving, Geoffrey
Format: Article
Language: English
Online access: Full text
Description
Abstract: We present a new BSSRDF for rendering images of translucent materials. Previous diffusion BSSRDFs are limited by the accuracy of classical diffusion theory. We introduce a modified diffusion theory that is more accurate for highly absorbing materials and near the point of illumination. The new diffusion solution accurately decouples single and multiple scattering. We then derive a novel, analytic, extended-source solution to the multilayer search-light problem by quantizing the diffusion Green's function. This allows the application of the diffusion multipole model to material layers several orders of magnitude thinner than previously possible and creates accurate results under high-frequency illumination. Quantized diffusion provides both a new physical foundation and a variable-accuracy construction method for sum-of-Gaussians BSSRDFs, which have many useful properties for efficient rendering and appearance capture. Our BSSRDF maps directly to previous real-time rendering algorithms. For film production rendering, we propose several improvements to previous hierarchical point cloud algorithms by introducing a new radial-binning data structure and a doubly-adaptive traversal strategy.
ISSN: 0730-0301
DOI: 10.1145/1964921.1964951
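The abstract's sum-of-Gaussians representation makes the radial diffusion profile very cheap to evaluate. As a rough illustration only, and not the paper's actual quantization scheme, the sketch below evaluates a profile R(r) as a weighted sum of normalized 2D Gaussians; the weights and variances are made-up placeholders, whereas in the paper they would be derived by quantizing the diffusion Green's function.

```python
import math

def gaussian_2d(r, v):
    """Normalized 2D isotropic Gaussian of variance v, evaluated at radius r."""
    return math.exp(-r * r / (2.0 * v)) / (2.0 * math.pi * v)

def sum_of_gaussians_profile(r, weights, variances):
    """Radial diffusion profile R(r) as a weighted sum of 2D Gaussians.

    The weights/variances here are placeholders; in the paper they result
    from quantizing the diffusion Green's function.
    """
    return sum(w * gaussian_2d(r, v) for w, v in zip(weights, variances))

# Hypothetical example: four Gaussians with made-up weights and variances (mm^2).
weights = [0.40, 0.30, 0.20, 0.10]
variances = [0.05, 0.20, 0.80, 3.20]
print(sum_of_gaussians_profile(1.0, weights, variances))
```

Because each term is a Gaussian, such a profile can be convolved with irradiance analytically or hierarchically, which is one reason the sum-of-Gaussians form is attractive for both real-time and film-production rendering.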