Analytically Tractable Inference in Deep Neural Networks

Since its inception, deep learning has relied overwhelmingly on backpropagation and gradient-based optimization algorithms to learn weight and bias parameters. The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks. In this paper, we demonstrate how TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures. Although TAGI's computational efficiency is still below that of deterministic approaches relying on backpropagation, it outperforms them on classification tasks and matches their performance on information-maximizing generative adversarial networks while using smaller architectures trained for fewer epochs.
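The paper itself details the full TAGI algorithm; as a rough illustration of the underlying idea, the sketch below applies analytic Gaussian conditioning to a single linear unit. This is a minimal sketch, not the paper's method: TAGI chains this kind of closed-form update layer by layer through deep architectures, whereas here the weights of one unit, kept as independent Gaussians (the diagonal-covariance assumption TAGI also makes), are updated in closed form from each observation, with no gradients involved. The function name and the toy regression task are illustrative only.

import numpy as np

def tagi_style_update(mu_w, var_w, x, y_obs, var_noise):
    # Forward pass: mean and variance of y = w^T x + v for Gaussian w and v.
    mu_y = mu_w @ x
    var_y = (var_w * x**2).sum() + var_noise
    # Covariance between each weight and the output (diagonal prior on w).
    cov_wy = var_w * x
    # Gaussian conditioning (a Kalman-style update) on the observed output.
    gain = cov_wy / var_y
    mu_w_new = mu_w + gain * (y_obs - mu_y)
    var_w_new = var_w - gain * cov_wy  # retain only the diagonal, as TAGI does
    return mu_w_new, var_w_new

# Toy usage: infer the weights of y = [2, -1] @ x from noisy observations.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
mu_w, var_w = np.zeros(2), np.ones(2)
for _ in range(200):
    x = rng.normal(size=2)
    y_obs = true_w @ x + rng.normal(scale=0.1)
    mu_w, var_w = tagi_style_update(mu_w, var_w, x, y_obs, var_noise=0.01)
print(mu_w)  # close to [2, -1], learned without backpropagation

Each update costs time linear in the number of weights, which is what makes the inference analytically tractable; the deep-network case treated in the paper propagates such Gaussian updates backward through the layers instead of propagating gradients.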

Bibliographic Details
Main authors: Nguyen, Luong-Ha; Goulet, James-A
Format: Article
Language: English
Subjects: Computer Science - Learning
DOI: 10.48550/arxiv.2103.05461
Published: 2021-03-09
Source: arXiv.org
Online access: https://arxiv.org/abs/2103.05461