Centaur: Robust Multimodal Fusion for Human Activity Recognition
Published in: IEEE Sensors Journal, 2024, Vol. 24 (11), pp. 18578-18591
Main authors: , ,
Format: Article
Language: English
Abstract: The proliferation of Internet of Things (IoT) and mobile devices equipped with heterogeneous sensors has enabled new applications that rely on the fusion of time series emitted by sensors with different modalities. While there are promising neural network architectures for multimodal fusion, their performance degrades quickly in the presence of consecutive missing data and noise across multiple modalities/sensors, issues that are prevalent in real-world settings. We propose Centaur, a multimodal fusion model for human activity recognition (HAR) that is robust to these data quality issues. Centaur combines a data cleaning module, a denoising autoencoder (DAE) with convolutional layers, and a multimodal fusion module, a deep convolutional neural network with a self-attention (SA) mechanism that captures cross-sensor (CS) correlation. We train Centaur using a stochastic data corruption scheme and evaluate it on five datasets containing data generated by multiple inertial measurement units (IMUs). We show that Centaur's data cleaning module outperforms two state-of-the-art autoencoder-based architectures and that its multimodal fusion module outperforms four strong baselines. Compared to two robust fusion architectures from related work, Centaur is more robust, especially to consecutive missing data occurring in multiple sensor channels, achieving 10.89%-16.56% higher accuracy in the HAR task.
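The record does not detail the stochastic data corruption scheme used to train Centaur. The following is a minimal sketch of the kind of corruption the abstract describes (consecutive missing spans in multiple sensor channels, plus noise); the function name and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def corrupt(x, max_span=20, p_channel=0.3, noise_std=0.1, seed=None):
    """Illustrative stochastic corruption for a (time, channels) array:
    with probability p_channel per channel, zero out one consecutive
    span of samples, then add Gaussian noise everywhere.
    All parameters are assumed for illustration."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    t, c = y.shape
    for ch in range(c):
        if rng.random() < p_channel:
            span = int(rng.integers(1, max_span + 1))   # length of the missing run
            start = int(rng.integers(0, max(t - span, 1)))
            y[start:start + span, ch] = 0.0             # consecutive missing data
    return y + rng.normal(0.0, noise_std, size=y.shape)  # sensor noise

# Example: corrupt a 6-channel IMU-like signal of 100 time steps.
clean = np.sin(np.linspace(0, 10, 100))[:, None] * np.ones((1, 6))
noisy = corrupt(clean, seed=0)
```

A DAE-based cleaning module would then be trained to reconstruct `clean` from `noisy`, which is what makes the corruption stochastic per training example.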
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2024.3388893