On the Convergence Rate of Gaussianization with Random Rotations
Main Authors: | , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Gaussianization is a simple generative model that can be trained without backpropagation. It has shown compelling performance on low-dimensional data. As the dimension increases, however, it has been observed that the convergence speed slows down. We show analytically that the number of required layers scales linearly with the dimension for Gaussian input. We argue that this is because the model is unable to capture dependencies between dimensions. Empirically, we find the same linear increase in cost for arbitrary input $p(x)$, but observe favorable scaling for some distributions. We explore potential speed-ups and formulate challenges for further research. |
DOI: | 10.48550/arxiv.2306.13520 |
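
To make the abstract concrete, below is a minimal NumPy/SciPy sketch of the kind of model it describes: each Gaussianization layer maps every marginal to a standard normal and then applies a random rotation. The specific choices here (rank-based marginal transforms, QR-sampled rotations, the helper names `marginal_gaussianize`, `random_rotation`, `gaussianize`) are generic illustrations, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def marginal_gaussianize(x):
    """Map each dimension's empirical marginal to a standard normal.

    Per-dimension ranks are converted to uniform quantiles and pushed
    through the Gaussian inverse CDF -- a non-parametric stand-in for
    the learned 1-D transforms used in practice.
    """
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # ranks 0..n-1 per column
    u = (ranks + 0.5) / n                              # quantiles in (0, 1)
    return norm.ppf(u)

def random_rotation(d):
    """Sample a random rotation via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))  # sign correction for a uniform (Haar) rotation

def gaussianize(x, n_layers):
    """Alternate marginal Gaussianization and random rotations."""
    for _ in range(n_layers):
        x = marginal_gaussianize(x)
        x = x @ random_rotation(x.shape[1]).T
    return x

# Toy check on correlated 2-D Gaussian data: a few layers suffice here,
# while (per the abstract) the required depth grows linearly with dimension.
d, n = 2, 10_000
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
x = rng.multivariate_normal(np.zeros(d), cov, size=n)
z = gaussianize(x, n_layers=10)
print(np.round(np.cov(z.T), 2))  # should approach the identity matrix
```

Note that no backpropagation is involved: each layer is fit (here, non-parametrically) from the current data alone, which is the training regime the abstract refers to.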