A simple theory for training response of deep neural networks
Format: Article
Language: English
Abstract: Deep neural networks give us a powerful method to model the relationship between the inputs and outputs of a training dataset. Such a network can be regarded as a complex adaptive system consisting of many artificial neurons that together act as an adaptive memory. The network's behavior is a training dynamics with a feedback loop from the evaluation of the loss function. It is already known that the training response can be constant or can show power-law-like aging in some ideal situations. However, gaps remain between those findings and other complex phenomena, such as network fragility. To fill the gap, we introduce a very simple network and analyze it. We show that the training response consists of several distinct factors depending on the training stage, the activation function, and the training method. In addition, we show feature-space reduction as an effect of stochastic training dynamics, which can result in network fragility. Finally, we discuss some complex phenomena of deep networks.
DOI: 10.48550/arxiv.2405.04074
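To make the abstract's picture concrete, here is a minimal sketch of "training dynamics with a feedback loop from the loss" on a very simple network. The specific model, loss, and response definition used in the paper are not given in this record, so everything below is an assumption: a single-weight network with a tanh activation, squared loss, stochastic (single-sample) updates, and a "training response" taken as the change in output caused by one update.

```python
# Minimal sketch, assuming: one-weight tanh network, squared loss, SGD.
# Not the paper's actual model; purely illustrative of the feedback loop.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: targets produced by a "teacher" of the same form.
x = rng.normal(size=100)
y = np.tanh(1.5 * x)

w, lr = 0.1, 0.05  # initial weight and learning rate (arbitrary choices)

for t in range(1, 201):
    i = rng.integers(len(x))                    # stochastic training: one sample
    out = np.tanh(w * x[i])                     # forward pass through activation
    grad = (out - y[i]) * (1 - out**2) * x[i]   # d(loss)/dw for 0.5*(out - y)^2
    w_new = w - lr * grad                       # loss feedback closes the loop
    # "Training response": how far one update moves the output on this sample.
    response = abs(np.tanh(w_new * x[i]) - out)
    w = w_new
    if t % 50 == 0:
        print(f"step {t:3d}  weight {w:+.3f}  response {response:.2e}")
```

Under these assumptions, the printed response typically shrinks as training proceeds, which is the kind of stage-dependent behavior the abstract attributes to training stage, activation function, and training method.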