ResNet Autoencoders for Unsupervised Feature Learning From High-Dimensional Data: Deep Models Resistant to Performance Degradation


Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 40511-40520
Main Authors: Wickramasinghe, Chathurika S.; Marino, Daniel L.; Manic, Milos
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Efficient modeling of high-dimensional data requires extracting only the relevant dimensions through feature learning. Unsupervised feature learning has gained tremendous attention because it is unbiased, requires no prior knowledge or expensive manual processing, and can handle exponential data growth. The deep Autoencoder (AE) is a state-of-the-art deep neural network for unsupervised feature learning that learns embedded representations using a series of stacked layers. However, as the AE network gets deeper, these learned embedded representations can deteriorate due to vanishing gradients, leading to performance degradation. This article presents the ResNet Autoencoder (RAE) and its convolutional version (C-RAE) for unsupervised feature learning. The advantage of RAE and C-RAE is that they let the user add residual connections for increased network capacity without incurring the degradation cost that standard AEs suffer in unsupervised feature learning. While RAE and C-RAE inherit all the advantages of AEs, such as automated non-linear feature extraction and unsupervised learning, they also allow users to design larger networks without adverse effects on feature learning performance. To evaluate RAE and C-RAE, we performed classification on the learned embedded representations, comparing them against AEs on the MNIST, Fashion MNIST, and CIFAR10 datasets. As the number of layers increased, C-RAE outperformed AE, showing significantly lower degradation of classification accuracy (less than 3%) than AE (33% to 65%). Further, C-RAE exhibited higher mean accuracy and lower variance of accuracy than the standard AE. When RAE and C-RAE were compared with widely used feature learning methods (convolutional AE, PCA, ICA, LLE, Factor Analysis, and SVD), C-RAE showed the highest accuracy.
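The core idea described in the abstract, an autoencoder whose encoder and decoder stack residual (skip-connected) layers so that gradients can bypass deep nonlinearities, can be illustrated with a minimal sketch. The sketch below is an assumption-based illustration in PyTorch, not the authors' implementation: it assumes a fully connected design, and the layer sizes (in_dim, hidden, code, depth) are hypothetical placeholders rather than the paper's hyperparameters.

```python
# Minimal sketch of a residual autoencoder (RAE-style), assuming a
# fully connected design; the paper's exact architecture may differ.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One hidden block with an identity skip connection, so the
    gradient can flow around the nonlinearity as the network deepens."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.act(self.fc(x))  # residual connection: x + f(x)

class ResNetAutoencoder(nn.Module):
    """Encoder/decoder built from stacked residual blocks around a
    low-dimensional bottleneck (the learned embedded representation)."""
    def __init__(self, in_dim: int = 784, hidden: int = 256,
                 code: int = 32, depth: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            *[ResidualBlock(hidden) for _ in range(depth)],
            nn.Linear(hidden, code),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code, hidden), nn.ReLU(),
            *[ResidualBlock(hidden) for _ in range(depth)],
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Unsupervised training minimizes a reconstruction loss, e.g. MSE:
model = ResNetAutoencoder()
x = torch.rand(16, 784)                      # e.g., flattened MNIST images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
```

Following the evaluation protocol in the abstract, the embedded representations would be taken from the bottleneck (here via model.encoder(x)) and fed to a separate classifier to measure how well the learned features support classification as depth grows.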
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3064819