Enhancing Sequential Model Performance with Squared Sigmoid TanH (SST) Activation Under Data Constraints
Saved in:

Main authors: , , ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract:
Activation functions enable neural networks to learn complex representations by introducing non-linearities. While feedforward models commonly use rectified linear units, sequential models such as recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and gated recurrent units (GRUs) still rely on Sigmoid and TanH activation functions. However, these classical activation functions often struggle to model sparse patterns and to capture temporal dependencies effectively when trained on small sequential datasets. To address this limitation, we propose the squared Sigmoid TanH (SST) activation, specifically tailored to enhance the learning capability of sequential models under data constraints. SST applies mathematical squaring to amplify the differences between strong and weak activations as signals propagate over time, facilitating improved gradient flow and information filtering. We evaluate SST-powered LSTMs and GRUs on diverse applications, such as sign language recognition, regression, and time-series classification tasks, where the dataset is limited. Our experiments demonstrate that SST models consistently outperform RNN-based models with baseline activations, exhibiting improved test accuracy.
DOI: 10.48550/arxiv.2402.09034
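
The record does not reproduce the paper's equations, so the snippet below is only a minimal sketch of the squaring idea described in the abstract, not the authors' implementation. The function names (`sst_sigmoid`, `sst_tanh`) and the handling of TanH's sign are assumptions; it simply squares the standard Sigmoid and TanH outputs and shows how squaring widens the gap between a weak and a strong gate activation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sst_sigmoid(x):
    # Squared Sigmoid: squaring pushes weak activations toward 0
    # while leaving strong activations (near 1) comparatively intact.
    return sigmoid(x) ** 2

def sst_tanh(x):
    # Squared TanH (assumption: plain element-wise squaring as the abstract
    # describes; note this discards the sign, and the paper's exact
    # formulation may differ).
    return np.tanh(x) ** 2

# Illustrate how squaring amplifies the difference between a weak and a
# strong gate value, which is the filtering effect the abstract describes.
weak, strong = sigmoid(0.5), sigmoid(3.0)
print(f"sigmoid: weak={weak:.3f} strong={strong:.3f} ratio={strong / weak:.2f}")
print(f"squared: weak={weak**2:.3f} strong={strong**2:.3f} ratio={strong**2 / weak**2:.2f}")
```

Running the sketch shows the strong-to-weak ratio growing after squaring (roughly 1.5x to 2.3x for these inputs), which is one way to read the abstract's claim that SST amplifies differences between strong and weak activations as signals propagate over time.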