From Beginning to BEGANing: Role of Adversarial Learning in Reshaping Generative Models

Bibliographic Details
Published in: Electronics (Basel), 2023-01, Vol. 12 (1), p. 155
Authors: Bhandari, Aradhita; Tripathy, Balakrushna; Adate, Amit; Saxena, Rishabh; Gadekallu, Thippa Reddy
Format: Article
Language: English
Online access: Full text
Description
Abstract: Deep generative models, such as deep Boltzmann machines, focused on models that provide a parametric specification of probability distribution functions. Such models are trained by maximizing intractable likelihood functions and therefore require numerous approximations to the likelihood gradient. This underlying difficulty led to the development of generative machines such as generative stochastic networks, which do not represent the likelihood functions explicitly, as the earlier models did, but are trained with exact backpropagation rather than the numerous approximations. These models use piecewise linear units that have well-behaved gradients. Generative machines were further extended with the introduction of an associative adversarial network, leading to the generative adversarial nets (GANs) model introduced by Goodfellow in 2014. GANs perform estimation with two multilayer perceptrons, called the generative model and the discriminative model, which are learned jointly by alternating the training of the two models, using game-theoretic principles. However, GANs suffer from many difficulties, including: the difficulty of training the models; criticality in the selection of hyper-parameters; difficulty in controlling the generated samples; balancing the convergence of the discriminator and generator; and the problem of mode collapse. Since their inception, efforts have been made by many researchers to tackle these issues, one at a time or several at once. However, most of them have been handled efficiently by the boundary equilibrium generative adversarial networks (BEGAN) model introduced by Berthelot et al. in 2017. In this work, we present the advent of adversarial networks, starting with the history behind the models and the developments made to GANs until the BEGAN model was introduced. Since some time has elapsed since the proposal of BEGAN, we also provide an up-to-date study, as well as future directions, for various aspects of adversarial learning.
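
Illustrative note: the alternating, game-theoretic training of the two multilayer perceptrons described in the abstract can be sketched as below. This is a minimal PyTorch sketch, not the code of the reviewed papers; the network sizes, learning rates, and the toy one-dimensional Gaussian "real" data are illustrative assumptions.

```python
# Minimal sketch of alternating GAN training: a generator G and a
# discriminator D, both small multilayer perceptrons, trained jointly
# as a two-player game (assumed toy setup, not the authors' code).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 1

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # toy "real" data ~ N(2, 0.5)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: learn to separate real samples from generated ones.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: push the discriminator to label generated samples as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The difficulties listed in the abstract (balancing discriminator and generator convergence, mode collapse) arise from exactly this alternating scheme, which is what BEGAN's equilibrium mechanism is designed to stabilize.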
ISSN: 2079-9292
DOI: 10.3390/electronics12010155