GPVC: Graphics Pipeline-Based Visibility Classification for Texture Reconstruction

Bibliographic Details
Published in: Remote Sensing (Basel, Switzerland), 2018-11, Vol. 10 (11), p. 1725
Authors: Huang, Xiangxiang; Zhu, Quansheng; Jiang, Wanshou
Format: Article
Language: English
Online access: Full text
Description
Abstract: Shadow mapping and ray tracing are the two popular approaches to visibility handling in multi-view-based texture reconstruction. Visibility testing based on either algorithm needs a user-defined bias to reduce computation error; however, a constant bias does not work for every part of a geometry, so the accuracy of both algorithms is limited. In this paper, we propose a high-precision graphics pipeline-based visibility classification (GPVC) method that introduces no bias. The method consists of two stages. In the first stage, a shader-based rendering pass is designed in the fixed graphics pipeline to generate initial visibility maps (IVMs). In the second stage, two algorithms, lazy-projection coverage correction (LPCC) and hierarchical iterative vertex-edge-region sampling (HIVERS), classify visible primitives as fully visible or partially visible. The proposed method can be easily implemented in the graphics pipeline to achieve parallel acceleration. In terms of efficiency, it outperforms the bias-based methods; in terms of accuracy, it can theoretically reach 100%. Compared with available libraries and software, the textured model produced by our method is smoother, with less distortion and dislocation.
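The constant-bias problem the abstract refers to can be illustrated with a minimal C++ sketch of a bias-based shadow-map visibility test. This is not the paper's GPVC code; the DepthMap type, the visibleWithBias function, and the numeric values are hypothetical, chosen only to show why one bias value cannot fit every part of a geometry.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical depth map rendered from one view: depth[row * width + col]
// stores the nearest surface depth along each pixel ray.
struct DepthMap {
    int width, height;
    std::vector<float> depth;
    float at(int u, int v) const { return depth[v * width + u]; }
};

// Classic bias-based test: a point is judged visible from the view if its
// depth is not farther than the stored depth plus a constant bias. The bias
// must absorb the depth map's quantization and interpolation error: too
// small and visible points are wrongly rejected ("shadow acne"); too large
// and occluded points wrongly pass ("peter-panning").
bool visibleWithBias(float pointDepth, const DepthMap& dm,
                     int u, int v, float bias) {
    return pointDepth <= dm.at(u, v) + bias;
}

int main() {
    DepthMap dm{2, 1, {10.0f, 10.0f}};
    // The same surface point re-projects with depth 10.004 due to
    // interpolation error: a bias of 0.01 accepts it, a bias of 0.001
    // rejects it. No single constant works for every primitive.
    std::printf("bias 0.01 : %d\n", visibleWithBias(10.004f, dm, 0, 0, 0.01f));
    std::printf("bias 0.001: %d\n", visibleWithBias(10.004f, dm, 0, 0, 0.001f));
    return 0;
}
```

GPVC, as described in the abstract, sidesteps this trade-off entirely: instead of thresholding depths with a bias, it classifies primitives as fully or partially visible directly in the graphics pipeline.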
ISSN: 2072-4292
DOI: 10.3390/rs10111725