AEDmts: An Attention-Based Encoder-Decoder Framework for Multi-Sensory Time Series Analytic
| Published in: | IEEE Access, 2020, Vol. 8, pp. 37406-37415 |
|---|---|
| Main authors: | , , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Numerous IoT applications have emerged in the human healthcare area with advances in wearable electronics. Physical and physiological data with strong spatial-temporal characteristics, collected by wearable sensors, are sent to smartphones, where they are aggregated and transferred to back-end applications for further processing. Analyzing these multivariate time series is important yet challenging, as they are affected by many complex factors, e.g., dynamic spatial-temporal correlations and external factors. In this paper, we propose an attention-based encoder-decoder framework for multi-sensory time-series analytics. It consists of four parts: data collection, data mining, time-series analytics, and user interaction. A temporal-attention-based encoder-decoder model is proposed to make long-term predictions of multiple time series and enable real-time user interaction. The proposed model uses LSTMs to learn the long-term dependencies of the time series associated with a given motion sequence, and the attention mechanism connects the encoder and the decoder to make long-term predictions of future time series. Through extensive experiments, the proposed model achieves better results in short-term and long-term prediction than state-of-the-art methods. An activity recognition algorithm based on LSTM is also proposed in this framework to accurately identify daily human activities and sports activities. Using five-fold and ten-fold cross-validation and comparison with six baseline machine learning models, the activity recognition algorithm achieves recognition rates of 98.89% and 99.28% for human activity. |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2020.2971579 |
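
The abstract above describes the model only in prose. As a concrete illustration of what an LSTM encoder-decoder with temporal attention over multivariate sensor streams can look like, here is a minimal sketch, assuming PyTorch; the class name, layer sizes, window length, and prediction horizon are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): an LSTM encoder-decoder with
# temporal attention for multivariate time-series forecasting, assuming PyTorch.
import torch
import torch.nn as nn


class AttnEncoderDecoder(nn.Module):
    def __init__(self, n_series: int, hidden: int = 64, horizon: int = 10):
        super().__init__()
        self.horizon = horizon                      # number of future steps to predict
        self.encoder = nn.LSTM(n_series, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(n_series, hidden)
        self.attn = nn.Linear(2 * hidden, 1)        # scores decoder state vs. each encoder state
        self.out = nn.Linear(2 * hidden, n_series)  # maps [decoder state; context] to next values

    def forward(self, x):                           # x: (batch, window, n_series)
        enc_out, (h, c) = self.encoder(x)           # enc_out: (batch, window, hidden)
        h, c = h.squeeze(0), c.squeeze(0)
        y_prev = x[:, -1, :]                        # start decoding from the last observation
        preds = []
        for _ in range(self.horizon):
            h, c = self.decoder(y_prev, (h, c))
            # Temporal attention: weight every encoder time step by its relevance
            # to the current decoder state, then form a context vector.
            scores = self.attn(torch.cat(
                [enc_out, h.unsqueeze(1).expand_as(enc_out)], dim=-1)).squeeze(-1)
            weights = torch.softmax(scores, dim=1)               # (batch, window)
            context = (weights.unsqueeze(-1) * enc_out).sum(1)   # (batch, hidden)
            y_prev = self.out(torch.cat([h, context], dim=-1))   # next-step prediction
            preds.append(y_prev)
        return torch.stack(preds, dim=1)             # (batch, horizon, n_series)


if __name__ == "__main__":
    model = AttnEncoderDecoder(n_series=6, hidden=64, horizon=10)
    window = torch.randn(4, 50, 6)                   # 4 samples, 50 past steps, 6 sensor channels
    print(model(window).shape)                       # torch.Size([4, 10, 6])
```

The sketch re-attends over all encoder states at every decoded step, which is the general mechanism the abstract credits for connecting the encoder and decoder when producing long-horizon forecasts; the exact attention formulation and training setup used in the paper may differ.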