Analytically Tractable Inference in Deep Neural Networks
Main authors: 
Format: Article
Language: eng
Subjects: 
Online access: Order full text
Abstract: Since its inception, deep learning has been overwhelmingly reliant on backpropagation and gradient-based optimization algorithms to learn weight and bias parameters. The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks. In this paper, we demonstrate that TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures. Although TAGI's computational efficiency is still below that of deterministic approaches relying on backpropagation, it outperforms them on classification tasks and matches their performance for information-maximizing generative adversarial networks, while using smaller architectures trained with fewer epochs.
DOI: 10.48550/arxiv.2103.05461
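For readers unfamiliar with the approach named in the abstract, the sketch below illustrates the general flavor of gradient-free Gaussian inference that TAGI builds on: a Gaussian belief over the weights of a single linear layer is propagated forward and then conditioned on an observation in closed form. This is a toy under strong assumptions (one layer, independent diagonal Gaussians, scalar output), not the paper's implementation; all names and values are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's code): maintain a diagonal Gaussian
# belief over the weights of one linear layer, propagate it forward, and
# update it analytically from a scalar observation -- no gradients involved.
# TAGI itself handles full deep networks; this only conveys the basic idea.

rng = np.random.default_rng(0)

# Prior belief over 3 weights: independent Gaussians (mean, variance).
mu_w = np.zeros(3)
var_w = np.ones(3)

x = rng.normal(size=3)   # one input vector
var_obs = 0.1            # assumed observation-noise variance
y = 1.5                  # observed scalar output

# Forward moment propagation: z = w @ x + noise is Gaussian since w is.
mu_z = mu_w @ x
var_z = (x**2) @ var_w + var_obs

# Analytical update via Gaussian conditioning:
# cov(w_i, z) = var_w_i * x_i because the weights are independent.
cov_wz = var_w * x
gain = cov_wz / var_z
mu_w = mu_w + gain * (y - mu_z)
var_w = var_w - gain * cov_wz

print("posterior mean:", mu_w)
print("posterior var :", var_w)
```

Each observation tightens the weight posterior in one closed-form pass, which is what makes the inference analytically tractable; the abstract's comparison with backpropagation concerns how this idea scales to deep architectures.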