Decoding text from electroencephalography signals: A novel Hierarchical Gated Recurrent Unit with Masked Residual Attention Mechanism
Saved in:
Published in: Engineering Applications of Artificial Intelligence, 2025-01, Vol. 139, p. 109615, Article 109615
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Progress in both neuroscience and natural language processing has opened doors for investigating brain-to-text techniques that reconstruct what individuals see, perceive, or focus on from human brain activity patterns. Non-invasive decoding using electroencephalography (EEG) signals is preferred for its comfort, cost-effectiveness, and portability. In brain-to-text applications, a pressing need has arisen for effective models that accurately capture the intricate details of EEG signals, such as global and local contextual information and long-term dependencies. In response to this need, we propose the Hierarchical Gated Recurrent Unit with Masked Residual Attention Mechanism (HGRU-MRAM) model, which combines a hierarchical structure with a masked residual attention mechanism to deliver a robust brain-to-text decoding system. Our experimental results on the ZuCo dataset demonstrate that this model significantly outperforms existing baselines, achieving state-of-the-art performance with Bilingual Evaluation Understudy (BLEU), Recall-Oriented Understudy for Gisting Evaluation (ROUGE), US National Institute of Standards and Technology metric (NIST), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Translation Edit Rate (TER), and Bilingual Evaluation Understudy with Representations from Transformers (BLEURT) scores of 48.29, 34.84, 4.07, 34.57, 21.98, and 40.45, respectively. The code is available at https://github.com/qpuchen/EEG-To-Sentence.
Highlights:
- Our model directly decodes brain activations into coherent text.
- We apply the HGRU decoder to infer structured representations from brain patterns.
- Our proposed MRAM compensates for key information in the representation learning process.
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2024.109615