3D SOC-Net: Deep 3D reconstruction network based on self-organizing clustering mapping

Bibliographic details
Published in: Expert Systems with Applications, 2023-03, Vol. 213, p. 119209, Article 119209
Authors: Gan, Y.S., Chen, Weihao, Yau, Wei-Chuen, Zou, Ziyun, Liong, Sze-Teng, Wang, Shih-Yuan
Format: Article
Language: English
Description
Abstract: Image-based 3D reconstruction from a single-view image is critical and fundamental in many areas and can be integrated into many applications to provide useful functions. However, accomplishing this process involves several crucial difficulties and challenges, such as self-occlusion and the lack of object information from other viewing perspectives. Thus, the 3D shape generated from a single-view image may not be satisfactory or robust, which affects its feasibility in further applications. Conventionally, the 3D reconstruction process requires multiple input images so that the context of the target object can be fully conveyed. In this paper, we propose a new and simple, yet powerful framework that improves the quality of the point cloud generated from a single-view image. Concretely, significant representatives are first discovered and selected by adopting a network architecture that contains both encoder and decoder models. The resultant point clouds are then obtained by extracting the mean shape using the methods of Chamfer Distance (CD), Earth Mover's Distance (EMD), and Self-Organizing Map (SOM). As a result, the proposed algorithm demonstrates its robustness and effectiveness when compared to state-of-the-art 3D reconstruction methods. The best mean loss exhibited is 4.45 when evaluated on 12 classes of the ShapeNetCoreV1 dataset. In addition, qualitative results are presented to further verify the reliability of the proposed method.

Highlights:
• Proposal of a single RGB image as the input for 3D point cloud reconstruction.
• The utilization of a mean shape extractor to improve the quality of 3D modeling.
• Three different mean shape extraction methods are employed for framework validation.
• The effectiveness of each mean shape is evaluated qualitatively and quantitatively.
• Compelling CD and EMD results are achieved when tested on the ShapeNetCore dataset.
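The abstract names Chamfer Distance (CD) as one of the mean-shape extraction and evaluation methods. As a point of reference, the symmetric CD between two point clouds can be sketched in a few lines of NumPy; this is an illustrative implementation, not the authors' code, and the brute-force pairwise-distance approach assumes clouds small enough to hold the full distance matrix in memory:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3).

    For each point in one cloud, take the squared Euclidean distance to its
    nearest neighbour in the other cloud; average both directions and sum.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds have zero Chamfer Distance.
cloud = np.random.default_rng(0).random((128, 3))
print(chamfer_distance(cloud, cloud))  # → 0.0
```

For large clouds, a k-d tree (e.g. `scipy.spatial.cKDTree`) avoids the O(N·M) distance matrix.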
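The Self-Organizing Map (SOM) named in the title and abstract fits 3D points with a small grid of nodes whose weights converge toward representative positions, which is one way to obtain a compact "mean shape" from a point cloud. The following is a minimal 1-D SOM training loop under assumed hyperparameters (node count, learning-rate and neighbourhood decay are illustrative, not taken from the paper):

```python
import numpy as np

def train_som(points, n_nodes=16, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D Self-Organizing Map over a 3-D point cloud.

    Returns an (n_nodes, 3) array of node weights; each node settles on a
    representative region of the cloud, so the weights summarize its shape.
    """
    rng = np.random.default_rng(seed)
    # Initialize node weights from randomly chosen input points.
    nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
    grid = np.arange(n_nodes)  # 1-D topology: node index = grid position
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)
        lr, sigma = lr0 * decay, max(sigma0 * decay, 0.5)
        for x in points[rng.permutation(len(points))]:
            # Best-matching unit: node closest to the sample.
            bmu = np.argmin(np.sum((nodes - x) ** 2, axis=1))
            # Gaussian neighbourhood over grid distance to the BMU.
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            # Pull each node toward the sample, weighted by neighbourhood.
            nodes += lr * h[:, None] * (x - nodes)
    return nodes
```

Because every update is a convex combination of a node and a data point, the trained nodes remain inside the cloud's bounding box.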
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2022.119209