Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN

Bibliographic Details
Published in: Medical & Biological Engineering & Computing, 2021-10, Vol. 59 (10), pp. 2037-2050
Authors: Li, Ming-ai; Han, Jian-fu; Yang, Jin-fu
Format: Article
Language: English
Online access: Full text
Description
Abstract: A motor imagery EEG (MI-EEG) signal is often selected as the driving signal in an active brain computer interface (BCI) system, and recognizing MI-EEG images via a convolutional neural network (CNN) has become a popular research direction; this raises the challenges of maintaining the integrity of the time-frequency-space information in MI-EEG images and of exploring the feature fusion mechanism in the CNN. However, information is excessively compressed in existing MI-EEG images, and a sequential CNN is unfavorable for the comprehensive utilization of local features. In this paper, a multidimensional MI-EEG imaging method is proposed based on time-frequency analysis and the Clough-Tocher (CT) interpolation algorithm. The time-frequency matrix of each electrode is generated via continuous wavelet transform (WT), and the relevant frequency band is extracted and divided into nine submatrices, whose longitudinal sums and lengths are calculated along the frequency and time directions successively to produce a 3 × 3 feature matrix for each electrode. Then, the feature matrices of all electrodes are interpolated to coincide with their corresponding scalp coordinates, yielding a WT-based multidimensional image, called WTMI. Meanwhile, a multilevel and multiscale feature fusion convolutional neural network (MLMSFFCNN) is designed for WTMI, which has dense information, a low signal-to-noise ratio, and a strong spatial distribution. Extensive experiments are conducted on the BCI Competition IV 2a and 2b datasets, and accuracies of 92.95% and 97.03% are achieved with 10-fold cross-validation, respectively, exceeding those of state-of-the-art imaging methods. The kappa and p values demonstrate that our method has lower class skew and error costs. The experimental results demonstrate that WTMI can fully represent the time-frequency-space features of MI-EEG and that MLMSFFCNN is beneficial for improving the collection of multiscale features and the fusion recognition of general and abstract features for WTMI.
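
To make the imaging pipeline concrete, the following is a minimal sketch of the WTMI construction described in the abstract, assuming pywt for the continuous wavelet transform and scipy's CloughTocher2DInterpolator for the CT step. The sampling rate, wavelet, scales, frequency band, per-block summary statistic, and output grid size are all illustrative assumptions, not the authors' exact settings.

```python
# Hedged sketch of the WTMI imaging idea: per-electrode CWT -> band-limited
# time-frequency matrix -> 3x3 feature matrix -> CT interpolation onto a
# 2-D scalp grid. All parameter values below are assumptions.
import numpy as np
import pywt
from scipy.interpolate import CloughTocher2DInterpolator

fs = 250                          # assumed sampling rate (Hz)

def electrode_feature_matrix(signal, fmin=8.0, fmax=30.0):
    """CWT, extract the relevant band, summarize a 3x3 grid of submatrices."""
    scales = np.arange(1, 65)
    coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    power = np.abs(coefs)
    band = power[(freqs >= fmin) & (freqs <= fmax), :]     # relevant band

    # The abstract's "longitudinal sums and lengths" are approximated here
    # by a per-block mean, which is an assumption about the exact statistic.
    f_blocks = np.array_split(band, 3, axis=0)
    feat = np.empty((3, 3))
    for i, fb in enumerate(f_blocks):
        for j, tb in enumerate(np.array_split(fb, 3, axis=1)):
            feat[i, j] = tb.mean()
    return feat

def wtmi_image(trial, coords, grid=32):
    """Interpolate per-electrode features onto a scalp grid (WTMI)."""
    feats = np.stack([electrode_feature_matrix(ch) for ch in trial])  # (C,3,3)
    xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), grid)
    ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), grid)
    gx, gy = np.meshgrid(xs, ys)
    image = np.empty((3, 3, grid, grid))
    for i in range(3):
        for j in range(3):
            interp = CloughTocher2DInterpolator(coords, feats[:, i, j],
                                                fill_value=0.0)
            image[i, j] = interp(gx, gy)       # one image channel per cell
    return image.reshape(9, grid, grid)        # 9-channel WTMI-style image

# Usage with random data: 22 electrodes, 4 s at 250 Hz, 2-D cap coordinates.
trial = np.random.randn(22, 4 * fs)
coords = np.random.rand(22, 2)
print(wtmi_image(trial, coords).shape)         # (9, 32, 32)
```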
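
For the recognition side, the sketch below illustrates the general idea of multilevel, multiscale feature fusion: parallel convolutions at several kernel sizes collect multiscale features, and shallow (general) and deep (abstract) features are fused before classification. The branch widths, kernel sizes, and fusion strategy are assumptions in the spirit of MLMSFFCNN, not the authors' published architecture.

```python
# Hedged PyTorch sketch of a multilevel, multiscale feature-fusion CNN;
# all layer sizes and kernel choices are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions at several kernel sizes, concatenated."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                          nn.BatchNorm2d(c_out), nn.ReLU())
            for k in (1, 3, 5)
        ])
    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class FusionCNN(nn.Module):
    """Fuses shallow (general) and deep (abstract) features before the classifier."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.stage1 = MultiScaleBlock(9, 16)     # 9-channel WTMI-style input
        self.stage2 = MultiScaleBlock(48, 32)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(48 + 96, n_classes)  # multilevel fusion
    def forward(self, x):
        f1 = self.stage1(x)                      # shallow, general features
        f2 = self.stage2(f1)                     # deeper, abstract features
        v = torch.cat([self.pool(f1).flatten(1),
                       self.pool(f2).flatten(1)], dim=1)
        return self.fc(v)

logits = FusionCNN()(torch.randn(8, 9, 32, 32))
print(logits.shape)                              # torch.Size([8, 4])
```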
ISSN: 0140-0118, 1741-0444
DOI: 10.1007/s11517-021-02396-w