Minimum error entropy criterion‐based randomised autoencoder


Detailed Description

Bibliographic Details
Published in: Cognitive Computation and Systems 2021-12, Vol.3 (4), p.332-341
Main authors: Ma, Rongzhi; Wang, Tianlei; Cao, Jiuwen; Dong, Fang
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: The extreme learning machine‐based autoencoder (ELM‐AE) has attracted a lot of attention due to its fast learning speed and promising representation capability. However, the existing ELM‐AE algorithms only reconstruct the original input and generally ignore the probability distribution of the data. The minimum error entropy (MEE), as an optimal criterion considering the distribution statistics of the data, is robust in handling non‐linear systems and non‐Gaussian noises. The MEE is equivalent to the minimisation of the Kullback–Leibler divergence. Inspired by these advantages, a novel randomised AE is proposed by adopting the MEE criterion as the loss function in the ELM‐AE (in short, the MEE‐RAE) in this study. Instead of solving the output weight by the Moore–Penrose generalised inverse, the optimal output weight is obtained by the fixed‐point iteration method. Further, a quantised MEE (QMEE) is applied to reduce the computational complexity of the MEE‐RAE. Simulations have shown that the QMEE‐RAE not only achieves superior generalisation performance but is also more robust to non‐Gaussian noises than the ELM‐AE.
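The idea in the abstract — random hidden weights as in the ELM‐AE, but with the output weight refined by an MEE‐driven fixed‐point iteration instead of the Moore–Penrose pseudoinverse — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `mee_rae_fit`, the sigmoid activation, the kernel bandwidth `sigma`, and the regularisation term are all assumptions, and the quantisation step of the QMEE is omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mee_rae_fit(X, n_hidden=20, sigma=1.0, n_iter=10, reg=1e-3, seed=0):
    """Hypothetical sketch of an MEE-based randomised autoencoder.

    Hidden weights and biases are drawn at random (as in the ELM-AE);
    the output weight beta is then refined by a fixed-point iteration
    that maximises the information potential (a Parzen-window estimate
    of the error entropy) of the reconstruction errors.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)        # random biases
    H = sigmoid(X @ W + b)                   # hidden-layer output

    # Initialise beta with the regularised least-squares (ELM-AE) solution.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)

    for _ in range(n_iter):
        E = X - H @ beta                     # reconstruction errors
        # Gaussian kernel over pairwise error differences
        # (Parzen estimate of the error density).
        D = E[:, None, :] - E[None, :, :]                     # (n, n, d)
        K = np.exp(-np.sum(D**2, axis=2) / (2 * sigma**2))    # (n, n)
        Hd = H[:, None, :] - H[None, :, :]   # pairwise hidden differences
        Xd = X[:, None, :] - X[None, :, :]   # pairwise input differences
        # Fixed-point update: kernel-weighted normal equations over pairs.
        A = np.einsum('ij,ijk,ijl->kl', K, Hd, Hd) + reg * np.eye(n_hidden)
        B = np.einsum('ij,ijk,ijl->kl', K, Hd, Xd)
        beta = np.linalg.solve(A, B)
    return W, b, beta
```

The O(n²) pairwise kernel sums are exactly what the quantised MEE is meant to cheapen: the QMEE replaces the full error set with a small codebook of quantised errors before forming the kernel sums.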
ISSN:2517-7567
1873-9601
1873-961X
DOI:10.1049/ccs2.12030