Multi-scale ResNet and BiGRU automatic sleep staging based on attention mechanism
Published in: PloS one 2022-06, Vol.17 (6), p.e0269500-e0269500
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Sleep staging is the basis of sleep evaluation and a key step in the diagnosis of sleep-related diseases. Despite their usefulness, existing sleep staging methods have several disadvantages: they rely on hand-crafted feature extraction, fail to recognize temporal patterns in long-term sequential data, and have reached an upper limit in staging accuracy. Hence, this paper proposes an automatic electroencephalogram (EEG) sleep signal staging model based on Multi-scale Attention Residual Nets (MAResnet) and a Bidirectional Gated Recurrent Unit (BiGRU). The proposed model builds on the deep residual neural network. Compared with the traditional residual learning module, it additionally uses improved channel and spatial feature attention units and applies convolution kernels of different sizes in parallel at the same position. In this way, multi-scale feature extraction of the EEG sleep signals and residual learning are performed together, avoiding network degradation. Finally, BiGRU is used to capture the dependence between sleep stages, enabling automatic learning of sleep staging features and sleep-cycle extraction. In the experiments, the classification accuracy and kappa coefficient of the proposed method on the Sleep-EDF data set are 84.24% and 0.78, which are 0.24% and 0.21 higher, respectively, than those of the traditional residual net. The proposed method was also verified on the UCD and SHHS data sets, where it achieved classification accuracies of 79.34% and 81.6%, respectively. Compared with related existing studies, the recognition accuracy is significantly improved, which validates the effectiveness and generalization performance of the proposed method.
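The core idea the abstract describes — convolution kernels of different sizes applied in parallel at the same position, fused and added back through a residual shortcut — can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the paper's implementation: the kernel sizes, the averaging fusion, and the single-channel signal are all illustrative assumptions.

```python
import numpy as np

def conv1d_same(x, kernel):
    # Single-channel 1-D cross-correlation with 'same' padding,
    # so the output has the same length as the input.
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, (pad, pad))
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

def multiscale_residual_block(x, kernel_sizes=(3, 5, 7), seed=0):
    """Toy sketch of a multi-scale residual block: parallel convolutions
    with different kernel sizes at the same position, fused by averaging,
    plus an identity (residual) shortcut to counter network degradation.
    Kernel sizes and random weights are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    branches = [conv1d_same(x, rng.standard_normal(k) / k)
                for k in kernel_sizes]        # one branch per kernel scale
    fused = np.mean(branches, axis=0)         # merge the parallel scales
    return x + fused                          # residual connection

# Toy 30 s "EEG epoch" sampled at 100 Hz (3000 points).
signal = np.sin(np.linspace(0, 8 * np.pi, 3000))
out = multiscale_residual_block(signal)
print(out.shape)  # (3000,)
```

Because every branch uses 'same' padding, the residual addition is shape-compatible without a projection; in the paper's full model this fusion would feed attention units and, across epochs, the BiGRU.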
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0269500