Utilizing deep learning models in CSI-based human activity recognition


Bibliographic Details
Published in: Neural Computing & Applications, 2022-04, Vol. 34 (8), p. 5993-6010
Main Authors: Shalaby, Eman; ElShennawy, Nada; Sarhan, Amany
Format: Article
Language: English
Online Access: Full text
Abstract:
In recent years, channel state information (CSI) in WiFi 802.11n has been increasingly used to collect data pertaining to human activity. Such raw data are then used to enhance human activity recognition. Activities such as lying down, falling, walking, running, sitting down, and standing up can now be detected using information collected through CSI. Human activity recognition has a multitude of applications, such as home monitoring of patients. Four deep learning models are presented in this paper, namely: a convolutional neural network (CNN) with a Gated Recurrent Unit (GRU); a CNN with a GRU and attention; a CNN with a GRU and a second CNN; and a CNN with Long Short-Term Memory (LSTM) and a second CNN. These models were trained to perform Human Activity Recognition (HAR) using CSI amplitude data collected by a CSI tool. Experiments conducted to test the efficacy of these models showed superior results compared with other recent approaches. This enhanced performance may be attributable to the ability of our models to make full use of the available data and to extract all data features, including their high dimensionality and temporal sequence. The highest average recognition accuracies among the proposed models were achieved by the CNN-GRU and the CNN-GRU with attention models, standing at 99.31% and 99.16%, respectively. In addition, the performance of the models was evaluated on unseen CSI data by training them on a random split of the dataset (70% training and 30% testing). Our models achieved impressive results, with accuracies reaching 100% for nearly all activities. For the lying down activity, the accuracy obtained from the CNN-GRU model stood at 99.46%, slightly higher than the 99.05% achieved by the CNN-GRU with attention model. This confirmed the robustness of our models against environmental changes.
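To make the CNN-GRU pipeline concrete, the sketch below runs a CSI amplitude sequence through a 1-D convolution over time and then a GRU, with the final hidden state fed to a softmax classifier. This is a minimal NumPy illustration of the architecture family the abstract names, not the authors' implementation: all dimensions (sequence length, subcarrier count, filter and hidden sizes), the random weights, and the six-class output are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1-D convolution over the time axis, followed by ReLU.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,) -> (T-K+1, C_out)."""
    K = w.shape[0]
    out = np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
                    for t in range(x.shape[0] - K + 1)])
    return np.maximum(out + b, 0.0)

def gru(seq, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """Run a GRU over the feature sequence; return the final hidden state."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = np.zeros(Uz.shape[0])
    for xt in seq:
        z = sig(Wz @ xt + Uz @ h + bz)              # update gate
        r = sig(Wr @ xt + Ur @ h + br)              # reset gate
        h_tilde = np.tanh(Wh @ xt + Uh @ (r * h) + bh)
        h = (1 - z) * h + z * h_tilde
    return h

# Toy dimensions (assumed): 50 time steps, 30 subcarrier amplitudes,
# 8 conv filters, hidden size 16, 6 activity classes.
T, C_in, C_out, H, n_classes = 50, 30, 8, 16, 6
x = rng.standard_normal((T, C_in))                  # one CSI amplitude window

w = rng.standard_normal((5, C_in, C_out)) * 0.1     # kernel width 5
b = np.zeros(C_out)
feats = conv1d_relu(x, w, b)                        # local spatial features

# Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh, randomly initialized
gru_params = [rng.standard_normal(s) * 0.1
              for s in [(H, C_out), (H, H), (H,)] * 3]
h = gru(feats, *gru_params)                         # temporal summary

Wo = rng.standard_normal((n_classes, H)) * 0.1      # classifier head
logits = Wo @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                # softmax over activities
```

The design split mirrors the abstract's rationale: the convolution captures the high-dimensional per-frame structure of the CSI amplitudes, while the GRU captures the time-sequence dependencies across frames.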
ISSN: 0941-0643; 1433-3058
DOI: 10.1007/s00521-021-06787-w