Storage and Learning phase transitions in the Random-Features Hopfield Model
Saved in:

Format: Article
Language: English
Online access: Order full text
Abstract:

The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities. Inspired by the manifold hypothesis in machine learning, we propose and investigate a generalization of the standard setting that we name the Random-Features Hopfield Model. Here, $P$ binary patterns of length $N$ are generated by applying a random projection followed by a non-linearity to Gaussian vectors sampled in a latent space of dimension $D$. Using the replica method from statistical physics, we derive the phase diagram of the model in the limit $P,N,D\to\infty$ with fixed ratios $\alpha=P/N$ and $\alpha_D=D/N$. Besides the usual retrieval phase, in which the patterns can be dynamically recovered from some initial corruption, we uncover a new phase in which the features characterizing the projection can be recovered instead. We call this phenomenon the learning phase transition, since the features are not explicitly given to the model but must instead be inferred from the patterns in an unsupervised fashion.
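The abstract fully specifies the generative construction, so it can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: the function name `random_features_patterns` is invented here, the feature matrix is taken to be Gaussian, and `sign(·)` is assumed as the non-linearity (the abstract leaves the non-linearity generic).

```python
import numpy as np

def random_features_patterns(N, D, P, seed=None):
    """Sketch of the Random-Features Hopfield pattern construction:
    P Gaussian latent vectors in dimension D, a random projection to
    dimension N, then a sign non-linearity (an assumed choice)."""
    rng = np.random.default_rng(seed)
    F = rng.standard_normal((N, D)) / np.sqrt(D)  # random projection ("features")
    Z = rng.standard_normal((D, P))               # Gaussian latent vectors
    patterns = np.sign(F @ Z)                     # P binary patterns of length N (columns)
    return patterns, F

# Example at fixed ratios alpha = P/N and alpha_D = D/N, as in the
# thermodynamic limit studied in the paper (values here are arbitrary).
N, alpha, alpha_D = 1000, 0.05, 0.1
patterns, features = random_features_patterns(N, int(alpha_D * N), int(alpha * N), seed=0)
```

In this picture, the retrieval phase concerns recovering columns of `patterns` from corrupted initial states, while the learning phase transition concerns recovering the columns of `F` itself, which the model never sees directly.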
DOI: 10.48550/arxiv.2303.16880