Learning in Deep Factor Graphs with Gaussian Belief Propagation
Format: Article
Language: English
Online access: Order full text
Abstract: We propose an approach for learning in Gaussian factor graphs. We treat all relevant quantities (inputs, outputs, parameters, latents) as random variables in a graphical model, and view both training and prediction as inference problems with different observed nodes. Our experiments show that these problems can be efficiently solved with belief propagation (BP), whose updates are inherently local, presenting exciting opportunities for distributed and asynchronous training. Our approach can be scaled to deep networks and provides a natural means of continual learning: use the BP-estimated parameter marginals of the current task as parameter priors for the next. On a video denoising task we demonstrate the benefit of learnable parameters over a classical factor graph approach, and we show encouraging performance of deep factor graphs for continual image classification.
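The continual-learning recipe in the abstract (reuse the BP-estimated parameter marginal of one task as the prior for the next) can be sketched for a single scalar parameter using Gaussian factors in information form. This is a minimal illustration under assumed names and data, not code from the paper: with all factors Gaussian, multiplying factors reduces to summing their information vectors and precisions, which is the local additive update that BP exploits.

```python
def gaussian_product(factors):
    # Multiply Gaussian factors in information form:
    # each factor is (eta, lam) with eta = lam * mu.
    eta = sum(f[0] for f in factors)
    lam = sum(f[1] for f in factors)
    return eta, lam

def observe(y, noise_prec):
    # Unary measurement factor: y ~ N(theta, 1 / noise_prec),
    # expressed as an information-form factor on theta.
    return noise_prec * y, noise_prec

# Task 1: infer parameter theta from noisy observations under a weak prior.
prior = (0.0, 1e-2)  # near-flat Gaussian prior on theta (illustrative)
task1 = [observe(y, noise_prec=1.0) for y in [2.1, 1.9, 2.0]]
eta, lam = gaussian_product([prior] + task1)
mu1 = eta / lam
print(f"task 1 posterior: mean={mu1:.3f}, precision={lam:.3f}")

# Continual learning: the task-1 posterior marginal becomes the task-2 prior.
task2 = [observe(y, noise_prec=1.0) for y in [2.4, 2.6]]
eta2, lam2 = gaussian_product([(eta, lam)] + task2)
print(f"task 2 posterior: mean={eta2 / lam2:.3f}, precision={lam2:.3f}")
```

In a deep factor graph the same additive update runs per edge and per parameter, which is why training can proceed with purely local, asynchronous message passing.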
DOI: 10.48550/arxiv.2311.14649