Three-dimensional shape generation via variational autoencoder generative adversarial network with signed distance function

Bibliographic details
Published in: International Journal of Electrical and Computer Engineering (Malacca, Malacca), 2023-08, Vol. 13 (4), p. 4009
Authors: Ajayi, Ebenezer Akinyemi; Lim, Kian Ming; Chong, Siew-Chin; Lee, Chin Poo
Format: Article
Language: English
Online access: Full text
Description
Abstract: Mesh-based 3-dimensional (3D) shape generation from a 2-dimensional (2D) image using a convolutional neural network (CNN) framework is an open problem in the computer graphics and vision domains. Most existing CNN-based frameworks lack robust algorithms that can scale well without combining different shape parts. Most CNN-based algorithms also lack suitable 3D data representations that fit into a CNN without modification while still producing high-quality 3D shapes. This paper presents an approach that integrates a variational autoencoder (VAE) and a generative adversarial network (GAN), called the 3-dimensional variational autoencoder signed distance function generative adversarial network (3D-VAE-SDFGAN), to create a 3D shape from a 2D image with considerably improved scalability and visual quality. The proposed method feeds only a single 2D image into the network to produce a mesh-based 3D shape. The network encodes the 2D image of the 3D object into a latent representation, from which an implicit surface representation of the corresponding 3D object is generated. A signed distance function (SDF) is used to preserve the object's inside-outside information in the implicit surface representation. Polygon mesh surfaces are then produced with the marching cubes algorithm. The ShapeNet dataset was used in the experiments to evaluate the proposed 3D-VAE-SDFGAN. The experimental results show that 3D-VAE-SDFGAN outperforms other state-of-the-art models.
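The abstract describes a pipeline of 2D image encoding to a latent code, SDF generation, and marching-cubes mesh extraction. The sketch below illustrates that flow in PyTorch under stated assumptions: all module names, layer sizes, the 32^3 SDF grid resolution, and the omission of the adversarial discriminator are illustrative choices, not the authors' exact 3D-VAE-SDFGAN architecture.

```python
# Minimal sketch of the image -> latent -> SDF -> mesh flow described in the
# abstract. Layer sizes and the 32^3 grid are assumptions for illustration;
# the GAN discriminator and training losses are omitted.
import torch
import torch.nn as nn
from skimage.measure import marching_cubes

class ImageEncoder(nn.Module):
    """Encodes a single 2D RGB image into VAE latent mean/log-variance."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, img):
        h = self.conv(img)
        return self.fc_mu(h), self.fc_logvar(h)

class SDFGenerator(nn.Module):
    """Maps a latent code to a dense grid of signed distance values."""
    def __init__(self, latent_dim=128, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, grid ** 3),  # one signed distance per grid cell
        )

    def forward(self, z):
        return self.net(z).view(-1, self.grid, self.grid, self.grid)

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# Forward pass: 2D image -> latent code -> SDF grid -> polygon mesh.
encoder, generator = ImageEncoder(), SDFGenerator()
image = torch.randn(1, 3, 64, 64)            # placeholder input image
mu, logvar = encoder(image)
sdf = generator(reparameterize(mu, logvar))  # implicit surface representation
# Marching cubes extracts the zero level set of the SDF as a triangle mesh.
verts, faces, normals, _ = marching_cubes(sdf[0].detach().numpy(), level=0.0)
```

In this reading, the SDF grid carries the inside-outside information mentioned in the abstract (negative inside the surface, positive outside), which is why the mesh is taken at the zero level set.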
ISSN: 2088-8708; 2722-2578
DOI: 10.11591/ijece.v13i4.pp4009-4019