A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia
Published in: Neural Networks 2020-03, Vol. 123, pp. 176-190
Format: Article
Language: English
Online Access: Full text
Abstract: Electroencephalographic (EEG) recordings generate an electrical map of the human brain that is useful for clinical inspection of patients and in biomedical smart Internet-of-Things (IoT) and Brain-Computer Interface (BCI) applications. From a signal processing perspective, EEGs yield a nonlinear and nonstationary, multivariate representation of the underlying neural circuitry interactions. In this paper, a novel multi-modal Machine Learning (ML) based approach is proposed to integrate EEG engineered features for automatic classification of brain states. EEGs are acquired from neurological patients with Mild Cognitive Impairment (MCI) or Alzheimer's disease (AD), and the aim is to discriminate Healthy Control (HC) subjects from patients. Specifically, in order to effectively cope with nonstationarities, 19-channel EEG signals are projected into the time–frequency (TF) domain by means of the Continuous Wavelet Transform (CWT), and a set of appropriate features (denoted as CWT features) is extracted from the δ, θ, α1, α2, and β EEG sub-bands. Furthermore, to exploit nonlinear phase-coupling information of EEG signals, higher order statistics (HOS) are extracted from the bispectrum (BiS) representation. The BiS generates a second set of features (denoted as BiS features), which are also evaluated in the five EEG sub-bands. The CWT and BiS features are fed into a number of ML classifiers to perform both 2-way (AD vs. HC, AD vs. MCI, MCI vs. HC) and 3-way (AD vs. MCI vs. HC) classifications. As an experimental benchmark, a balanced EEG dataset that includes 63 AD, 63 MCI, and 63 HC subjects is analyzed. Comparative results show that when the concatenation of CWT and BiS features (denoted as multi-modal (CWT+BiS) features) is used as input, the Multi-Layer Perceptron (MLP) classifier outperforms all other models, specifically the Autoencoder (AE), Logistic Regression (LR), and Support Vector Machine (SVM). Consequently, the proposed multi-modal ML scheme can be considered a viable alternative to state-of-the-art computationally intensive deep learning approaches.
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2019.12.006
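
As described in the abstract, the first feature set is obtained by projecting each of the 19 EEG channels into the time–frequency domain with the CWT and extracting features per sub-band. The abstract does not specify the mother wavelet, the exact band edges, or which statistics form the "CWT features", so the sketch below is only one plausible reading: it uses PyWavelets (`pywt`) with a complex Morlet wavelet and pools the mean and standard deviation of coefficient magnitudes per sub-band.

```python
import numpy as np
import pywt

# Common EEG sub-band edges in Hz (assumed; the paper's exact definitions are not in the abstract).
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha1": (8.0, 10.0), "alpha2": (10.0, 13.0), "beta": (13.0, 30.0)}

def cwt_band_features(signal, fs, wavelet="cmor1.5-1.0", n_freqs=64):
    """Project one EEG channel into the TF domain with the CWT and pool
    simple statistics per sub-band (illustrative choice of features)."""
    fc = pywt.central_frequency(wavelet)        # centre frequency of the mother wavelet
    target = np.linspace(0.5, 30.0, n_freqs)    # pseudo-frequencies covering all five bands
    scales = fc * fs / target                   # scale <-> frequency relation for the CWT
    coeffs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        band = np.abs(coeffs[(freqs >= lo) & (freqs < hi), :])
        feats[f"{name}_mean"] = band.mean()
        feats[f"{name}_std"] = band.std()
    return feats
```

Per the abstract, such statistics would be computed for every channel and concatenated into the CWT feature vector.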
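The second feature set comes from higher order statistics of the bispectrum. The abstract states neither how the bispectrum is estimated nor which HOS descriptors are used; the following sketch applies a standard direct (FFT-based), segment-averaged bispectrum estimate and reduces it to two commonly used descriptors, mean bispectral magnitude and bispectral entropy, purely as an illustration. In the paper's pipeline this would be evaluated separately on each sub-band signal.

```python
import numpy as np

def bispectrum_features(x, nfft=256, seg_len=256, step=128):
    """Direct (FFT-based) bispectrum estimate averaged over overlapping segments,
    reduced to two illustrative HOS features (not necessarily the paper's choice)."""
    win = np.hanning(seg_len)
    half = nfft // 2
    acc = np.zeros((half, half), dtype=complex)
    n_seg = 0
    for start in range(0, len(x) - seg_len + 1, step):
        seg = x[start:start + seg_len] * win
        X = np.fft.fft(seg, nfft)[:half]
        idx = np.arange(half)
        sum_idx = idx[:, None] + idx[None, :]            # f1 + f2 grid
        valid = sum_idx < half
        # B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)), restricted to valid frequency pairs
        B = np.outer(X, X) * np.conj(X[np.minimum(sum_idx, half - 1)])
        B[~valid] = 0.0
        acc += B
        n_seg += 1
    B_mean = np.abs(acc / max(n_seg, 1))
    p = B_mean / (B_mean.sum() + 1e-12)                  # normalised bispectral distribution
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()       # bispectral entropy
    return {"bis_mean_magnitude": B_mean.mean(), "bis_entropy": entropy}
```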
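Finally, the comparison of classifiers on the concatenated multi-modal (CWT+BiS) features can be sketched with scikit-learn. The hidden-layer size, cross-validation scheme, and other hyperparameters below are placeholders, not the paper's settings; the same routine covers the 2-way and 3-way label configurations described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(X_cwt, X_bis, y, n_splits=5):
    """Concatenate CWT and BiS feature vectors (the multi-modal input) and
    compare MLP, LR and SVM with stratified cross-validation."""
    X = np.hstack([X_cwt, X_bis])                        # multi-modal (CWT+BiS) features
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    models = {
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
        "LR": LogisticRegression(max_iter=2000),
        "SVM": SVC(kernel="rbf"),
    }
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)    # standardise features before fitting
        scores = cross_val_score(pipe, X, y, cv=cv)      # accuracy for 2-way or 3-way labels
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```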