Dual model transfer learning to compensate for individual variability in brain-computer interface
Saved in:
Published in: Computer methods and programs in biomedicine, 2024-09, Vol. 254, p. 108294, Article 108294
Main authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract:
•Brain-computer interface technology uses deep neural networks to improve performance.
•Complex models require extensive data, so many studies try to utilize group data.
•Due to individual variability, simply pooled group data yields insufficient decoding performance.
•This study applies a particular layer that specifically reflects individual variability.
•Comparison with other methods demonstrated the superiority of the proposed approach.
Recent advancements in brain-computer interface (BCI) technology have seen a significant shift towards incorporating complex decoding models such as deep neural networks (DNNs) to enhance performance. These models are particularly crucial for sophisticated tasks such as regression for decoding arbitrary movements. However, BCI models trained and tested on individual data often suffer from limited performance and poor generalizability across subjects. This limitation arises primarily because DNN models have an enormous number of parameters, and training such complex models demands extensive datasets. Nevertheless, group data pooled from many subjects may not yield sufficient decoding performance because of the inherent variability in neural signals both across individuals and over time.
To address these challenges, this study proposed a transfer learning approach that effectively adapts to subject-specific variability across cortical regions. Our method involved training two separate movement-decoding models: one on individual data and another on pooled group data. We then created a salience map for each cortical region from the individual model, which allowed us to identify how each input region's contribution varies across subjects. Based on this contribution variance, we combined the individual and group models using a modified knowledge distillation framework. This approach kept the group model universally applicable by assigning greater weights to the input data, while the individual model was fine-tuned to focus on regions with significant individual variance.
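To make the workflow above concrete, the sketch below combines an individual and a group regression decoder through a distillation-style loss weighted by the cross-subject variance of a gradient-based salience map. It is a minimal illustration in PyTorch under stated assumptions: the architecture, dimensions, salience proxy, and region-weighting scheme are hypothetical choices, not the authors' implementation.

```python
# Hedged sketch: dual decoders, per-region salience, variance-weighted distillation.
import torch
import torch.nn as nn

N_REGIONS, FEATS_PER_REGION, OUT_DIM = 8, 16, 3   # hypothetical sizes
IN_DIM = N_REGIONS * FEATS_PER_REGION

def make_decoder():
    # Simple MLP stand-in for the movement-decoding DNN.
    return nn.Sequential(nn.Linear(IN_DIM, 64), nn.ReLU(), nn.Linear(64, OUT_DIM))

individual_model = make_decoder()  # assumed pre-trained on one subject's data
group_model = make_decoder()       # assumed pre-trained on pooled group data
combined_model = make_decoder()    # student model trained below

def region_salience(model, x):
    # Mean absolute input gradient, averaged within each cortical region.
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    grads = x.grad.abs().mean(dim=0)                             # (IN_DIM,)
    return grads.view(N_REGIONS, FEATS_PER_REGION).mean(dim=1)   # (N_REGIONS,)

# Cross-subject variance of region salience -> per-region weight alpha in [0, 1].
subject_batches = [torch.randn(32, IN_DIM) for _ in range(9)]    # placeholder data
saliences = torch.stack([region_salience(individual_model, xb) for xb in subject_batches])
var = saliences.var(dim=0)
alpha = (var - var.min()) / (var.max() - var.min() + 1e-8)       # (N_REGIONS,)

def distillation_loss(x, y_true):
    # Student matches the target plus two teachers: the individual model sees
    # inputs emphasized in high-variance regions, the group model sees inputs
    # down-weighted there (one plausible reading of the weighting idea).
    w_ind = alpha.repeat_interleave(FEATS_PER_REGION)
    w_grp = (1.0 - alpha).repeat_interleave(FEATS_PER_REGION)
    with torch.no_grad():
        y_ind = individual_model(x * w_ind)
        y_grp = group_model(x * w_grp)
    y_student = combined_model(x)
    mse = nn.functional.mse_loss
    return mse(y_student, y_true) + 0.5 * (mse(y_student, y_ind) + mse(y_student, y_grp))

# One illustrative training step for the combined (student) model.
opt = torch.optim.Adam(combined_model.parameters(), lr=1e-3)
x, y = torch.randn(32, IN_DIM), torch.randn(32, OUT_DIM)
opt.zero_grad()
distillation_loss(x, y).backward()
opt.step()
```

In the actual study the salience maps, cortical region definitions, and distillation weighting are derived from the subjects' neural recordings; the block above only mirrors the overall data flow.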
Our combined model effectively encapsulated individual variability. We validated this approach with nine subjects performing arm-reaching tasks; our method outperformed (mean correlation coefficient, r = 0.75) both the individual (r = 0.70) and group models (r = 0.40) in decoding performance. Improvements were particularly notable in cases where individual models performed poorly (e.g., from r = 0.50 with the individual decoder to r = 0.61 with the proposed decoder).
These results not only demonstrate the p…
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2024.108294