Two-stage Human Activity Recognition on Microcontrollers with Decision Trees and CNNs
| Main authors: | |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract:
Human Activity Recognition (HAR) has become an increasingly popular task for
embedded devices such as smartwatches. Most HAR systems for ultra-low-power
devices are based on classic Machine Learning (ML) models, whereas Deep
Learning (DL), although reaching state-of-the-art accuracy, is less popular due
to its high energy consumption, which poses a significant challenge for
battery-operated and resource-constrained devices. In this work, we bridge the
gap between on-device HAR and DL with a hierarchical architecture composed of a
decision tree (DT) and a one-dimensional Convolutional Neural Network (1D CNN).
The two classifiers operate in a cascaded fashion on two different sub-tasks:
the DT classifies only the easiest activities, while the CNN deals with the
more complex ones. With experiments on a state-of-the-art dataset and targeting
a single-core RISC-V MCU, we show that this approach saves up to 67.7% energy
w.r.t. a "stand-alone" DL architecture at iso-accuracy. Additionally, the
two-stage system either introduces a negligible memory overhead (up to 200 B)
or even reduces the total memory occupation.
DOI: 10.48550/arxiv.2206.07652
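
The abstract describes a cascaded DT → 1D-CNN pipeline but does not specify the exact gating rule used on-device. The following is a minimal sketch of that two-stage idea, not the authors' implementation: the class set `EASY_CLASSES`, the threshold `CONF_THRESHOLD`, the helper names `dt_predict_proba` and `cnn_predict`, and the confidence-based fallback policy are all illustrative assumptions, and both models are stubbed so the example runs end to end.

```python
# Minimal sketch of a two-stage (cascaded) HAR classifier: a cheap decision
# tree handles "easy" activities, a 1D CNN is invoked only when needed.
# All names and the gating policy are assumptions made for illustration.
import numpy as np

EASY_CLASSES = {0, 1}      # hypothetical "easy" activity labels handled by the DT
CONF_THRESHOLD = 0.9       # hypothetical confidence needed to stop at stage 1
NUM_CLASSES = 4            # illustrative number of activity classes


def dt_predict_proba(window: np.ndarray) -> np.ndarray:
    """Stand-in for the stage-1 decision tree: returns class probabilities."""
    # A real deployment would evaluate a trained DT on features extracted
    # from the sensor window; here we return a deterministic pseudo-random
    # distribution so the sketch runs end to end.
    rng = np.random.default_rng(abs(int(window.sum() * 1e6)) % (2**32))
    p = rng.random(NUM_CLASSES)
    return p / p.sum()


def cnn_predict(window: np.ndarray) -> int:
    """Stand-in for the stage-2 1D CNN, invoked only on 'hard' windows."""
    return int(np.argmax(window.mean(axis=0))) % NUM_CLASSES


def two_stage_predict(window: np.ndarray) -> int:
    """Cascade: accept the cheap DT output only for confident predictions of
    'easy' activities; otherwise fall back to the more expensive CNN."""
    probs = dt_predict_proba(window)
    cls = int(np.argmax(probs))
    if cls in EASY_CLASSES and probs[cls] >= CONF_THRESHOLD:
        return cls              # cheap path: the CNN is never executed
    return cnn_predict(window)  # expensive path, used only when necessary


if __name__ == "__main__":
    # One window of 128 tri-axial accelerometer samples (illustrative shape).
    window = np.random.default_rng(0).standard_normal((128, 3))
    print("predicted activity:", two_stage_predict(window))
```

Under these assumptions, the energy saving reported in the abstract comes from the fact that the expensive CNN runs only on windows the first-stage DT cannot confidently assign to an easy activity.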