Self‐supervised non‐rigid structure from motion with improved training of Wasserstein GANs


Bibliographic details
Published in: IET Computer Vision, 2023-06, Vol. 17 (4), p. 404-414
Main authors: Wang, Yaming; Peng, Xiangyang; Huang, Wenqing; Ye, Xiaoping; Jiang, Mingfeng
Format: Article
Language: English
Online access: Full text
Description
Summary: This study proposes a self-supervised method to reconstruct 3D limbic structures from 2D landmarks extracted from a single view. The self-consistency loss can be reduced by applying a random orthogonal projection to the reconstructed 3D structure, so the training process can be self-supervised through geometric self-consistency in the reconstruction-projection-reconstruction cycle. The self-supervised network, called SS-Graphformer, consists mainly of graph convolution and Transformer encoders. With an added discriminator, SS-Graphformer serves as the generator in a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP), which improves reconstruction accuracy. Experiments demonstrate that adding the 2D structure discriminator significantly improves reconstruction accuracy. In short, SS-Graphformer is a graph convolution and Transformer-based method for 3D structure reconstruction from 2D landmarks; geometric self-consistency provides the self-supervision, and combining it with the 2D structure discriminator further improves reconstruction accuracy. Extensive experiments show that the model achieves state-of-the-art performance on two popular data sets.
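The reconstruction-projection-reconstruction cycle in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the `recon` callable stands in for the (hypothetical) 2D-to-3D network, and the mean-squared-error form of the self-consistency loss is an assumption.

```python
import numpy as np

def random_orthogonal_projection(points_3d, rng=None):
    """Project Nx3 points to 2D through a random orthographic camera.

    A random 3x3 rotation is drawn via QR decomposition of a Gaussian
    matrix; its first two rows give the orthographic projection.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal((3, 3))
    q, r = np.linalg.qr(a)
    q *= np.sign(np.diag(r))   # fix column signs so the factorization is unique
    if np.linalg.det(q) < 0:   # flip one row to get a proper rotation
        q[2] *= -1.0
    return points_3d @ q[:2].T  # N x 2 projected landmarks

def self_consistency_loss(recon, points_3d, rng=None):
    """Mean-squared gap between a reconstructed 3D shape and its
    re-reconstruction from a random orthographic view (the
    reconstruction-projection-reconstruction cycle; 'recon' is a
    placeholder for the 2D-to-3D network)."""
    proj_2d = random_orthogonal_projection(points_3d, rng)
    rebuilt = recon(proj_2d)
    return float(np.mean((rebuilt - points_3d) ** 2))
```

In training, minimizing this loss pushes the network to produce 3D shapes whose random 2D views can be lifted back to the same shape, which is the geometric self-consistency signal that removes the need for 3D ground truth.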
ISSN: 1751-9632; 1751-9640
DOI: 10.1049/cvi2.12175