MindGrasp: A New Training and Testing Framework for Motor Imagery Based 3-Dimensional Assistive Robotic Control
Saved in:
Main Authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | With increasing global age and disability, assistive robots are
becoming more necessary, and brain-computer interfaces (BCIs) are often
proposed as a solution for understanding the intent of a disabled person who
needs assistance. Most frameworks for electroencephalography (EEG)-based motor
imagery (MI) BCI control rely on direct control of the robot in Cartesian
space. However, 3-dimensional movement then requires 6 motor imagery classes,
a distinction that is difficult even for experienced BCI users. In this paper,
we present a simulated training and testing framework that reduces the number
of motor imagery classes to 4 while still grasping objects in
three-dimensional space. This is achieved through semi-autonomous eye-in-hand
vision-based control of the robotic arm, while the user-controlled BCI
commands movement to the left and right, as well as movement toward and away
from the object of interest. Additionally, the framework includes a method of
training a BCI directly on the assistive robotic system, which should transfer
more readily to a real-world assistive robot than a standard training protocol
such as Graz-BCI. The presented results do not include real human EEG data;
they are instead provided as a baseline for comparison with future human data
and other improvements to the system. |
---|---|
DOI: | 10.48550/arxiv.2003.00369 |
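
To make the control split described in the abstract concrete, below is a minimal Python sketch of how 4 decoded motor imagery classes (lateral left/right, toward/away from the object) could be combined with a semi-autonomous vision component that handles the vertical axis. The class labels, the `VisionServo` stub, and all numeric gains are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a 4-class MI control mapping plus an autonomous
# vertical vision servo. All names and gains are hypothetical.

from dataclasses import dataclass

# The four user-controlled motor imagery classes mentioned in the
# abstract, mapped to planar velocity directions (x: lateral,
# y: along the approach axis toward the object).
MI_CLASS_TO_PLANAR_VELOCITY = {
    "left":   (-1.0, 0.0),
    "right":  (+1.0, 0.0),
    "toward": (0.0, +1.0),
    "away":   (0.0, -1.0),
}

SPEED = 0.05  # m/s, assumed command magnitude


@dataclass
class VisionServo:
    """Stand-in for the semi-autonomous eye-in-hand component: it
    supplies the vertical velocity needed to keep the detected object
    centered in the camera frame, so the user never issues up/down
    motor imagery commands."""
    vertical_error_m: float = 0.0  # object offset in the camera frame
    gain: float = 0.5

    def vertical_velocity(self) -> float:
        # Simple proportional correction toward zero vertical error.
        return -self.gain * self.vertical_error_m


def command_velocity(mi_class: str, servo: VisionServo) -> tuple:
    """Fuse the decoded MI class (user-controlled planar motion) with
    the vision servo's autonomous vertical correction into a single
    3D end-effector velocity command (vx, vy, vz)."""
    vx, vy = MI_CLASS_TO_PLANAR_VELOCITY[mi_class]
    return (SPEED * vx, SPEED * vy, servo.vertical_velocity())


if __name__ == "__main__":
    servo = VisionServo(vertical_error_m=0.02)
    print(command_velocity("toward", servo))  # (0.0, 0.05, -0.01)
```

The point of the split is visible in `command_velocity`: the user only ever selects among 4 planar directions, while the third Cartesian axis is filled in autonomously, which is how the framework avoids the 6-class decoding problem.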