Deep Gated Networks: A framework to understand training and generalisation in deep learning
Understanding the role of (stochastic) gradient descent (SGD) in the training and generalisation of deep neural networks (DNNs) with ReLU activation has been an object of study in recent years. In this paper, we use deep gated networks (DGNs) as a framework to obtain insights about DNNs wit...
Saved in:
Main authors: | Lakshminarayanan, Chandrashekar ; Singh, Amit Vikram |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence ; Computer Science - Learning ; Statistics - Machine Learning |
Online access: | Full text available |
creator | Lakshminarayanan, Chandrashekar ; Singh, Amit Vikram |
description | Understanding the role of (stochastic) gradient descent (SGD) in the training
and generalisation of deep neural networks (DNNs) with ReLU activation has been
an object of study in recent years. In this paper, we use deep gated networks
(DGNs) as a framework to obtain insights about DNNs with ReLU activation. In a
DGN, a single neuronal unit has two components: a pre-activation input (equal
to the inner product of the layer's weights and the previous layer's outputs)
and a gating value in $[0,1]$; the output of the unit is the product of the
pre-activation input and the gating value. The standard DNN with ReLU
activation is a special case of a DGN in which the gating value is $1$ or $0$
depending on whether the pre-activation input is positive or not. We
theoretically analyse and experiment with several variants of DGNs, each
variant suited to understanding a particular aspect of either training or
generalisation in DNNs with ReLU activation. Our theory sheds light on two
questions: i) why increasing depth up to a point helps training, and ii) why
increasing depth beyond that point hurts training. We also present
experimental evidence that gate adaptation, i.e., the change of gating values
over the course of training, is key for generalisation. |
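The description above fully specifies the DGN forward computation. Below is a minimal NumPy sketch of it; the names (dgn_forward, relu_gates) and layer sizes are illustrative assumptions, not taken from the paper or its code.

```python
# Minimal sketch of a deep gated network (DGN) forward pass (illustrative, not the authors' code).
import numpy as np

def dgn_forward(x, weights, gate_fn):
    """Each unit's output is (pre-activation input) * (gating value in [0, 1])."""
    h = x
    for W in weights:
        pre = W @ h               # pre-activation: inner product of layer weights and previous layer output
        gates = gate_fn(pre)      # gating values, each in [0, 1]
        h = pre * gates           # unit output = pre-activation * gate
    return h

# Special case: the standard ReLU DNN, where the gate is 1 if the pre-activation is positive, else 0.
def relu_gates(pre):
    return (pre > 0).astype(pre.dtype)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]  # two layers, sizes chosen arbitrarily
x = rng.standard_normal(8)

out_relu = dgn_forward(x, weights, relu_gates)                      # identical to a plain ReLU network
out_soft = dgn_forward(x, weights, lambda p: 1 / (1 + np.exp(-p)))  # a "soft" gated variant with gates in (0, 1)
```

Passing relu_gates recovers the ReLU network, since $x \cdot \mathbf{1}[x>0] = \max(x,0)$; any other map into $[0,1]$ gives a gated variant in the spirit of the DGN framework.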
doi_str_mv | 10.48550/arxiv.2002.03996 |
format | Article |
identifier | DOI: 10.48550/arxiv.2002.03996 |
language | eng |
recordid | cdi_arxiv_primary_2002_03996 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Learning ; Statistics - Machine Learning |
title | Deep Gated Networks: A framework to understand training and generalisation in deep learning |