Diversity in a signal-to-image transformation approach for EEG-based motor imagery task classification


Detailed description

Bibliographic details
Published in: Medical & Biological Engineering & Computing, 2020-02, Vol. 58 (2), p. 443–459
Main authors: Yilmaz, Bahar Hatipoglu; Yilmaz, Cagatay Murat; Kose, Cemal
Format: Article
Language: English
Online access: Full text
Description
Abstract: Motor imagery-based brain–computer interfaces (BCIs) have developed rapidly in recent years. In these systems, electroencephalogram (EEG) signals are recorded while a subject imagines performing a motor movement, such as moving the right or left hand. In this paper, we sought to validate and enhance our previously proposed angle-amplitude transformation (AAT) technique, a simple signal-to-image transformation approach for the classification of EEG and MEG signals. For this purpose, we diversified our previous method and proposed four new angle-amplitude graph (AAG) representation methods for the AAT. These modifications concern, for example, the use of different left/right-side changing points at different distances. To confirm the validity of the proposed methods, we performed experiments on BCI Competition III Dataset IIIa, a benchmark dataset widely used for EEG-based multi-class motor imagery tasks. The proposed procedure can be summarized as follows: (i) convert EEG signals to AAG images using the proposed AAT approaches; (ii) extract image features using a Scale-Invariant Feature Transform (SIFT)-based Bag of Visual Words (BoW); and (iii) classify the features with the k-Nearest Neighbor (kNN) algorithm. Experimental results showed that the changes to the baseline AAT approach improved classification performance on Dataset IIIa, reaching an accuracy of 96.50% on the two-class problem (left/right-hand movement imagination) and 97.99% on the four-class problem (left/right-hand, foot, and tongue movement imagination). These gains stem mainly from the enhanced AAG image representations. Graphical Abstract: The flow diagram of the proposed methodology.
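The three-step pipeline described in the abstract (signal-to-image transformation, feature extraction, kNN classification) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual method: the AAT mapping here is an assumed stand-in (consecutive sample pairs binned by angle and amplitude into a 2-D histogram), and flattened pixel intensities replace the SIFT-based BoW features; only the overall image-then-classify structure mirrors the abstract.

```python
import numpy as np

def aag_image(signal, bins=32):
    """Map a 1-D signal to a 2-D angle-amplitude graph (AAG) image.

    Assumed stand-in for the paper's AAT: each consecutive sample pair
    (x[t], x[t+1]) is treated as a vector whose angle and amplitude index
    a cell of a 2-D histogram; see the paper for the real construction.
    """
    dx, dy = signal[:-1], signal[1:]
    angle = np.arctan2(dy, dx)           # in [-pi, pi]
    amp = np.hypot(dx, dy)
    img, _, _ = np.histogram2d(
        angle, amp, bins=bins,
        range=[[-np.pi, np.pi], [0.0, amp.max() + 1e-9]],
    )
    return img / img.max()               # normalize to [0, 1]

def knn_predict(train_X, train_y, test_X, k=3):
    """Minimal k-NN classifier on flattened image features (Euclidean)."""
    preds = []
    for x in test_X:
        dist = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dist)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Toy demo on synthetic signals: two "classes" with different dominant
# frequencies stand in for the two motor-imagery classes.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
def make(freq):
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

X = np.array([aag_image(make(f)).ravel() for f in [5] * 10 + [12] * 10])
y = np.array([0] * 10 + [1] * 10)
test = np.array([aag_image(make(f)).ravel() for f in (5, 12)])
pred = knn_predict(X, y, test)
print(pred)
```

Because each frequency traces a differently shaped point cloud in the angle-amplitude plane, even this crude histogram image separates the two synthetic classes; the paper's contribution lies in more discriminative AAG constructions and SIFT/BoW features on real EEG.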
ISSN: 0140-0118
EISSN: 1741-0444
DOI: 10.1007/s11517-019-02075-x