A multibranch CNN-BiLSTM model for human activity recognition using wearable sensor data


Bibliographic Details
Published in: The Visual Computer 2022-12, Vol. 38 (12), pp. 4095–4109
Main authors: Challa, Sravan Kumar; Kumar, Akhilesh; Semwal, Vijay Bhaskar
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Human activity recognition (HAR) has become a significant area of research in human behavior analysis, human–computer interaction, and pervasive computing. Recently, deep learning (DL)-based methods have been applied successfully to time-series data generated from smartphones and wearable sensors to predict various human activities. Even though DL-based approaches have performed very well in activity recognition, they still face challenges in handling time-series data. Several issues persist with time-series data, such as difficulties in feature extraction and heavily biased data. Moreover, most HAR approaches rely on manual feature engineering. In this paper, to design a robust classification model for HAR using wearable sensor data, a hybrid of a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) is used. The proposed multibranch CNN-BiLSTM network performs automatic feature extraction from the raw sensor data with minimal data pre-processing. The combination of CNN and BiLSTM makes the model capable of learning local features as well as long-term dependencies in sequential data. The different filter sizes used in the proposed model can capture various temporal local dependencies and thus help to improve the feature extraction process. To evaluate the model's performance, three benchmark datasets, i.e., WISDM, UCI-HAR, and PAMAP2, are utilized. The proposed model achieved accuracies of 96.05%, 96.37%, and 94.29% on the WISDM, UCI-HAR, and PAMAP2 datasets, respectively. The obtained experimental results demonstrate that the proposed model outperforms the other compared approaches.
ISSN:0178-2789
1432-2315
DOI:10.1007/s00371-021-02283-3