Occupancy Map Guided Fast Video-Based Dynamic Point Cloud Coding


Full Description

Saved in:
Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-02, Vol. 32 (2), p. 813-825
Main Authors: Xiong, Jian, Gao, Hao, Wang, Miaohui, Li, Hongliang, Lin, Weisi
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Summary: In video-based dynamic point cloud compression (V-PCC), 3D point clouds are projected into patches, and the patches are then padded into 2D images suitable for the video compression framework. However, the patch projection-based method produces a large number of empty pixels, and the far and near components are projected to generate separate 2D images (video frames). As a result, the generated video has a high resolution and a doubled frame rate, so V-PCC incurs a huge computational complexity. This paper proposes an occupancy map guided fast V-PCC method. Firstly, the relationship between predictive coding and block complexity is studied based on a local linear image gradient model. Secondly, according to the V-PCC strategies of patch projection and block generation, we investigate the differences in rate-distortion characteristics between different types of blocks, and the temporal correlations between the far and near layers. Finally, by taking advantage of the fact that occupancy maps can explicitly indicate the block types, we propose an occupancy map guided fast coding method, in which coding is performed differently on the different types of blocks. Experiments on typical dynamic point clouds show that the proposed method achieves an average 43.66% time saving at the cost of only 0.27% and 0.16% Bjontegaard Delta (BD) rate increase under the geometry Point-to-Point (D1) error and the attribute Luma Peak Signal-to-Noise Ratio (PSNR), respectively.
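The core idea of the abstract — using the binary occupancy map to separate empty (padded), boundary, and fully occupied blocks so coding effort can be allocated per block type — can be sketched as follows. This is a simplified illustration under assumed names, not the paper's implementation: `classify_blocks`, the three labels, and the 16x16 block size are hypothetical choices made for the demo.

```python
def classify_blocks(occupancy_map, block_size=16):
    """Label each block of a binary occupancy map as 'empty', 'full',
    or 'boundary' (partially occupied). Simplified sketch: the actual
    V-PCC block typing and mode decisions are defined by the codec."""
    h = len(occupancy_map)
    w = len(occupancy_map[0])
    labels = {}
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            y_end, x_end = min(y + block_size, h), min(x + block_size, w)
            ones = sum(occupancy_map[r][c]
                       for r in range(y, y_end)
                       for c in range(x, x_end))
            total = (y_end - y) * (x_end - x)
            if ones == 0:
                labels[(y, x)] = "empty"     # padded pixels only: skip costly search
            elif ones == total:
                labels[(y, x)] = "full"      # genuine point data: full mode search
            else:
                labels[(y, x)] = "boundary"  # mixed occupied/empty: reduced search
    return labels

# Toy 32x32 occupancy map: top-left 16x16 block fully occupied,
# and an 8x8 occupied patch inside the bottom-right block.
occ = [[0] * 32 for _ in range(32)]
for r in range(16):
    for c in range(16):
        occ[r][c] = 1
for r in range(16, 24):
    for c in range(16, 24):
        occ[r][c] = 1

labels = classify_blocks(occ, block_size=16)
# labels[(0, 0)] -> "full", labels[(0, 16)] -> "empty",
# labels[(16, 16)] -> "boundary"
```

A fast encoder built on this classification would, for example, restrict the rate-distortion search for "empty" blocks and spend the full search budget only on "full" blocks, which is the general shape of the time saving the abstract reports.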
ISSN:1051-8215
1558-2205
DOI:10.1109/TCSVT.2021.3063501