TwineNet: coupling features for synthesizing volume rendered images via convolutional encoder–decoders and multilayer perceptrons

Bibliographic Details
Published in: The Visual Computer 2024-10, Vol. 40 (10), p. 7201-7220
Main authors: Luo, Shengzhou; Xu, Jingxing; Dingliana, John; Wei, Mingqiang; Han, Lu; He, Lewei; Pan, Jiahui
Format: Article
Language: English
Online access: Full text
Abstract: Volume visualization plays a crucial role in both academia and industry, as volumetric data is extensively utilized in fields such as medicine, geosciences, and engineering. Addressing the complexities of volume rendering, neural rendering has emerged as a potential solution, facilitating the production of high-quality volume rendered images. In this paper, we propose TwineNet, a neural network architecture specifically designed for volume rendering. TwineNet combines features extracted from volume data, transfer functions, and viewpoints by utilizing twining skip connections across multiple feature layers. Building upon the TwineNet architecture, we introduce two neural networks, VolTFNet and PosTFNet, which leverage convolutional encoder–decoders and multilayer perceptrons to synthesize volume rendered images with novel transfer functions and viewpoints. Our experimental findings demonstrate the superiority of our models compared to state-of-the-art approaches in generating high-quality volume rendered images with novel transfer functions and viewpoints. This research contributes to advancing the field of volume rendering and showcases the potential of neural rendering techniques in scientific visualization.
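The abstract describes combining encoder–decoder image features with MLP embeddings of transfer-function and viewpoint parameters via skip connections across feature levels. The paper itself is not included in this record, so the following NumPy sketch is only a toy, shape-level illustration of that general pattern, not the authors' TwineNet architecture: the function names, the random-weight MLP, and the averaging used to fuse skip features and conditioning are all assumptions made for illustration.

```python
import numpy as np

def encode(image, levels=3):
    """Downsample a 2-D feature map by 2x per level (average pooling),
    keeping each level's features for later skip connections."""
    feats = []
    x = image
    for _ in range(levels):
        feats.append(x)
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return x, feats  # smallest latent map, plus per-level skip features

def mlp_embed(params, out_dim, rng):
    """Toy one-layer MLP (random weights stand in for learned ones)
    embedding transfer-function / viewpoint parameters."""
    W = rng.standard_normal((out_dim, params.size))
    return np.maximum(W @ params, 0.0)  # linear map + ReLU

def decode(latent, skips, cond):
    """Upsample back to full resolution, fusing ('twining') the matching
    encoder skip features and the conditioning embedding at each level."""
    x = latent + cond.mean()  # toy fusion: broadcast a conditioning scalar
    for skip in reversed(skips):
        x = x.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbor upsample
        x = 0.5 * (x + skip)                       # skip-connection fusion
    return x
```

A quick usage pass on an 8x8 map with three levels yields a 1x1 latent and an 8x8 reconstruction, showing how each decoder stage lines up with one encoder skip level.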
ISSN: 0178-2789, 1432-2315
DOI: 10.1007/s00371-024-03368-5