An optimized method for variational autoencoders based on Gaussian cloud model
Published in: Information Sciences, 2023-10, Vol. 645, p. 119358, Article 119358
Format: Article
Language: English
Online access: Full text
Abstract: The Variational Autoencoder is one of the most valuable generative models in the field of unsupervised learning. Owing to its architectural characteristics, the Variational Autoencoder offers insufficient precision for high-resolution image reconstruction. In this paper, a prior-variant model of the Variational Autoencoder based on the Gaussian Cloud Model is proposed that optimizes the latent-variable sampling method, the network structure, and the loss function. First, the Gaussian Cloud Model replaces the prior distribution of the Variational Autoencoder. Second, the sampling process is recast as two consecutive Gaussian distributions. Finally, a new loss function based on the envelope curve of the Gaussian Cloud Model is presented to approximate the real data distribution. The method is evaluated qualitatively and quantitatively on several datasets to demonstrate its correctness and effectiveness.
Highlights:
• GCMVAE adds representation learning for the reconstructed data.
• The Gaussian cloud can be understood as two consecutive Gaussian distributions.
• GCMVAE increases the probability of capturing detail in the latent variables during sampling.
• The data generated by GCMVAE are smoother and more continuous.

A hedged sketch of the two-stage sampling step is given after the record below.
ISSN: 0020-0255
DOI: 10.1016/j.ins.2023.119358
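
The abstract and highlights describe the latent-variable sampling as two consecutive Gaussian draws, which corresponds to the standard forward normal-cloud generator: an entropy value En' is drawn from N(En, He²), and the latent variable is then drawn from N(Ex, En'²). The sketch below is a minimal illustration of that two-stage, reparameterized sampling; the module name `GaussianCloudSampler`, the encoder heads for (Ex, En, He), and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of two-stage "Gaussian cloud" sampling:
# the encoder is assumed to output three per-dimension statistics (Ex, En, He);
# the entropy is perturbed by the hyper-entropy, and the latent variable is then
# drawn around Ex with that perturbed spread.
import torch
import torch.nn as nn


class GaussianCloudSampler(nn.Module):
    """Reparameterized sampling z ~ N(Ex, En'^2) with En' ~ N(En, He^2)."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        # Encoder heads for the three cloud-model statistics (assumed design).
        self.ex = nn.Linear(in_dim, latent_dim)        # expectation Ex
        self.log_en = nn.Linear(in_dim, latent_dim)    # log of entropy En
        self.log_he = nn.Linear(in_dim, latent_dim)    # log of hyper-entropy He

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        ex = self.ex(h)
        en = self.log_en(h).exp()
        he = self.log_he(h).exp()

        # First Gaussian: perturb the entropy with the hyper-entropy.
        en_prime = en + he * torch.randn_like(he)
        # Second Gaussian: draw the latent variable around Ex with spread |En'|.
        z = ex + en_prime.abs() * torch.randn_like(ex)
        return z


if __name__ == "__main__":
    h = torch.randn(4, 128)                  # a batch of encoder features
    sampler = GaussianCloudSampler(128, 32)
    z = sampler(h)
    print(z.shape)                           # torch.Size([4, 32])
```

Because both draws use the reparameterization trick, gradients flow through Ex, En, and He, so such a sampler can sit inside an end-to-end trained encoder; how the paper's envelope-curve loss constrains these statistics is not specified in the abstract and is therefore not sketched here.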