Interpretable Human Activity Recognition With Temporal Convolutional Networks and Model-Agnostic Explanations


Bibliographic Details
Published in: IEEE Sensors Journal, 2024-09, Vol. 24 (17), p. 27607-27617
Main Authors: Bijalwan, Vishwanath; Manan Khan, Abdul; Baek, Hangyeol; Jeon, Sangmin; Kim, Youngshik
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: This research advances the field of human activity recognition (HAR) by developing a robust and interpretable deep learning model using wearable sensor data. We address seven discrete activities through a multimodal fusion architecture that combines temporal convolutional networks (TCNs), convolutional neural networks (CNNs), and long short-term memory (LSTM) networks. Each branch plays to its strength: TCNs capture long-range temporal dependencies, CNNs extract local features, and LSTMs model sequential information. A dedicated fusion layer integrates these features. Fivefold cross-validation on challenging data yields a mean accuracy of 98.7% with a standard deviation of 0.003. In addition, we use local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) to offer insight into the model's decision-making process, thereby improving its transparency and fostering confidence. This study contributes robust and interpretable deep learning models that can be used in various applications.
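
The paper's implementation is not reproduced in this record, so the following is a minimal sketch of how such a three-branch fusion model might look. It assumes PyTorch, 128-sample windows of 6 sensor channels, 64-unit branches, and mean-pooled convolutional features; all of these are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

class FusionHAR(nn.Module):
    """Hypothetical TCN + CNN + LSTM fusion sketch for 7-class HAR."""
    def __init__(self, n_channels=6, n_classes=7, hidden=64):
        super().__init__()
        # TCN branch: dilated 1-D convolutions (a simplified stand-in for
        # full TCN residual blocks) for long-range temporal dependencies.
        self.tcn = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        # CNN branch: small receptive fields for local motion features.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM branch: sequential information across the window.
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # Fusion layer: concatenate the three feature vectors, then classify.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):                      # x: (batch, time, channels)
        xc = x.transpose(1, 2)                 # conv branches expect (batch, channels, time)
        f_tcn = self.tcn(xc).mean(dim=2)       # global average pooling over time
        f_cnn = self.cnn(xc).mean(dim=2)
        _, (h, _) = self.lstm(x)               # final hidden state of the LSTM
        fused = torch.cat([f_tcn, f_cnn, h[-1]], dim=1)
        return self.head(fused)                # logits for the 7 activities

logits = FusionHAR()(torch.randn(8, 128, 6))   # 8 windows, 128 steps, 6 sensor axes

For the explainability step, the abstract names LIME and SHAP as model-agnostic explainers. Below is a hedged sketch of the SHAP side, applying shap.KernelExplainer to the model above; flattening each window into a feature vector and the sample counts are assumptions about how the time series is presented to the explainer, not details from the paper (LIME's tabular explainer can be wired up the same way).

import numpy as np
import shap

model = FusionHAR().eval()

def predict(flat):
    # flat: (n, 128 * 6) array; reshape each row back into a sensor window.
    x = torch.as_tensor(flat, dtype=torch.float32).reshape(-1, 128, 6)
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).numpy()

background = np.random.randn(20, 128 * 6)      # stand-in for real training windows
explainer = shap.KernelExplainer(predict, background)
shap_values = explainer.shap_values(np.random.randn(1, 128 * 6), nsamples=100)
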
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2024.3418496