Adversarial Defense Based on Denoising Convolutional Autoencoder in EEG-Based Brain-Computer Interfaces


Detailed Description

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 146441-146452
Main authors: Ding, Yongting; Li, Lin; Li, Qingyan
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: The exploration and implementation of brain-computer interfaces (BCIs) utilizing electroencephalography (EEG) are becoming increasingly widespread. However, their safety considerations have received scant attention. Recent studies have shown that EEG-based BCIs are vulnerable to adversarial attacks. Remarkably, only a limited amount of literature has addressed adversarial defense strategies for EEG-based BCIs. This study introduces a defense approach based on autoencoders, termed the Denoising Convolutional Autoencoder (DCAE), which serves as a preprocessing unit preceding the classification model. The DCAE aims to mitigate adversarial disturbances before samples are fed into the classifier, thereby preserving the classifier's original structure. Experiments were conducted using two different EEG datasets and three convolutional neural network (CNN) models to evaluate the effectiveness of the DCAE. The experimental results show that the proposed method achieves a better defense effect in most cases against various adversarial attack methods. Additionally, the sensitivity of the DCAE to different magnitudes of perturbation was evaluated. The findings indicate that the robustness of the DCAE is not affected by variation in attack intensity, a characteristic not observed in existing defense strategies for EEG-based BCIs. It is our aspiration that these results will advance the frontier of research on defending EEG-based BCIs against adversarial threats.
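The abstract describes the DCAE as a denoising preprocessing stage placed in front of a frozen classifier. A minimal sketch of that idea in PyTorch is shown below; the channel counts, kernel sizes, and input dimensions are assumptions for illustration only, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    """Illustrative denoising convolutional autoencoder for EEG trials.

    Layer widths and kernel sizes here are assumed for demonstration;
    the paper's exact architecture is not reproduced.
    """

    def __init__(self, n_channels: int = 22):
        super().__init__()
        # Encoder: compress the (channels x time) EEG signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        # Decoder: reconstruct a denoised signal of the original length,
        # so the downstream classifier sees the shape it was trained on.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# At inference time the DCAE sits in front of the unchanged classifier:
# the perturbed trial is denoised first, then classified.
dcae = DCAE(n_channels=22)
x_adv = torch.randn(4, 22, 256)   # hypothetical batch of perturbed EEG trials
x_denoised = dcae(x_adv)          # preprocessing step; shape is preserved
```

In training, such an autoencoder would be fit to map perturbed inputs back to their clean counterparts (e.g. with an MSE reconstruction loss), which is why the classifier itself never needs to be retrained.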
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3467154