TinyRCE: Multi Purpose Forward Learning for Resource Restricted Devices



Bibliographic Details
Published in: IEEE Sensors Letters, 2023-08, pp. 1-4
Main authors: Pau, Danilo Pietro; Pisani, Andrea; Aymone, Fabrizio M.; Ferrari, Gianluigi
Format: Article
Language: English
Description
Abstract: The challenge of deploying neural network learning workloads on ultra-low-power tiny devices has recently attracted several machine learning researchers in the TinyML community. A typical on-device learning session processes real-time streams of data acquired by heterogeneous sensors. In this context, this paper proposes TinyRCE, a forward-only learning approach based on a hyperspherical classifier, which can be deployed on microcontrollers and potentially integrated into the sensor package. TinyRCE is fed with compact features extracted by a convolutional neural network, which can be trained with backpropagation (BP) or can be an extreme learning machine with randomly initialized weights. A forget mechanism has been introduced to discard useless neurons from the hidden layer, since they can become redundant over time. TinyRCE has been evaluated with a new interleaved learning-and-testing data protocol that mimics a typical forward on-tiny-device workload. It has been tested on the standard MLCommons Tiny datasets used for Keyword Spotting and Image Classification, and against the respective neural benchmarks. It achieved 95.25% average accuracy over the former classes (vs. 91.49%) and 87.17% over the latter (vs. 100%, caused by overfitting). In terms of complexity, TinyRCE requires 22× fewer MACCs than SoftMax (with 36 epochs) on the former, while it requires 5× more MACCs than SoftMax (with 500 epochs) on the latter. Classifier complexity and memory footprint are marginal with respect to those of the feature extractor, for both training and inference workloads.
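To make the abstract's idea concrete, the following is a minimal, illustrative sketch of an RCE-style (Restricted Coulomb Energy) hyperspherical classifier with a simple age-based forget mechanism. It is not the authors' TinyRCE implementation; the class name `RCEClassifier` and the parameters `r_max` (initial sphere radius) and `forget_after` (age threshold for pruning) are assumptions chosen for illustration.

```python
import numpy as np

class RCEClassifier:
    """Illustrative RCE-style hyperspherical classifier (not the TinyRCE
    reference code). Each prototype is a hypersphere (center, radius, label)
    in feature space; an age-based forget mechanism prunes unused prototypes."""

    def __init__(self, r_max=1.0, forget_after=100):
        self.protos = []          # list of dicts: center, radius, label, age
        self.r_max = r_max        # initial radius of a new hypersphere
        self.forget_after = forget_after  # prune prototypes unused this long

    def fit_one(self, x, y):
        """Forward-only update on a single (feature vector, label) pair."""
        x = np.asarray(x, dtype=float)
        covered = False
        for p in self.protos:
            p["age"] += 1
            d = np.linalg.norm(x - p["center"])
            if d < p["radius"]:
                if p["label"] == y:
                    covered = True
                    p["age"] = 0          # prototype is still useful
                else:
                    p["radius"] = d       # shrink conflicting hypersphere
        if not covered:
            # commit a new prototype centered on the unexplained sample
            self.protos.append({"center": x, "radius": self.r_max,
                                "label": y, "age": 0})
        # forget mechanism: discard prototypes that went unused for too long
        self.protos = [p for p in self.protos if p["age"] <= self.forget_after]

    def predict(self, x):
        """Return the label of the nearest covering hypersphere, or None."""
        x = np.asarray(x, dtype=float)
        best_label, best_d = None, np.inf
        for p in self.protos:
            d = np.linalg.norm(x - p["center"])
            if d < p["radius"] and d < best_d:
                best_label, best_d = p["label"], d
        return best_label
```

Training is a single forward pass per sample (no gradients), which is why the per-sample update cost stays small relative to the CNN feature extractor, as the abstract notes.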
ISSN: 2475-1472
DOI: 10.1109/LSENS.2023.3307119