Selective auditory attention in CI users - EEG data


Detailed Description

Bibliographic Details
Main Authors: Jehn, Constantin, Reichenbach, Tobias, Hahne, Anja, Vavatzanidis, Niki, Kossmann, Adrian
Format: Dataset
Language: eng
Description
Summary:

Experiment
We conducted a selective auditory attention experiment in which stimuli were presented in the free field over two loudspeakers separated by 60°. Before each trial, participants received instructions directing their attention to a specific audiobook. This guidance was provided visually on a screen, with a symbol indicating the direction of the selected audiobook. Initially, we conducted eight trials in a single-speaker scenario, in which only one audiobook was presented from alternating sides. Each trial lasted approximately two minutes, and the presented story changed after every two trials. In twelve subsequent trials, we implemented a competing-speaker paradigm in which two stories were presented simultaneously; however, the distractor story started 10 s later, affording participants time to discern the target speaker. To further randomize the organization, every second participant started with the block of s2 stimuli instead of s1.

EEG Recording
We collected EEG data using an actiCHamp system (Brain Products GmbH, Germany) equipped with 32 electrodes. For each participant, between two and four electrodes were removed because of their proximity to the CI magnet and sound processor. The sampling rate was set to 1 kHz, and an online low-pass filter with a cutoff frequency of 280 Hz was applied. Prior to the experiment, electrode impedances were kept below 20 kΩ. We monitored the impedances through both the single-speaker and competing-speaker scenarios and, if needed, applied additional conductive gel to ensure that the impedances remained below the 20 kΩ threshold. To synchronize the audio and the EEG recording, we used an audio splitter and recorded the presented audio as two auxiliary channels on the EEG recorder, with two StimTraks (Brain Products GmbH, Germany) as adapters. We performed an offline correlation analysis between the recorded audio signal and delayed versions of the clean stimuli; the delay yielding the highest Pearson's r was used to align the respective stimuli with the EEG recording. Additionally, we sent onse

Dataset
We provide the data in HDF5 format. It includes:
- the EEG recordings (raw and ICA-cleaned; see the paper for the method)
- the stimuli and precalculated features (speech envelope and onset envelope)
For the organization of the file and an example of how to read the data using Python, see the hdf5_dataset_info.txt file.
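The alignment step described above (picking the delay that maximizes Pearson's r between the recorded auxiliary audio and delayed versions of the clean stimulus) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual analysis code; the function name and the synthetic signals are placeholders.

```python
import numpy as np

def find_alignment_delay(recorded, clean, max_delay):
    """Return the delay (in samples) at which `clean` best matches
    `recorded`, judged by Pearson's r over a sliding window.
    Name and interface are illustrative, not from the dataset."""
    best_delay, best_r = 0, -np.inf
    n = len(clean)
    for d in range(max_delay + 1):
        segment = recorded[d:d + n]
        if len(segment) < n:            # ran past the end of the recording
            break
        r = np.corrcoef(segment, clean)[0, 1]
        if r > best_r:
            best_r, best_delay = r, d
    return best_delay, best_r

# Synthetic check: embed the clean signal at a known 37-sample delay
rng = np.random.default_rng(0)
clean = rng.standard_normal(1000)
recorded = np.concatenate([rng.standard_normal(37), clean,
                           rng.standard_normal(100)])
recorded = recorded + 0.1 * rng.standard_normal(len(recorded))  # channel noise

delay, r = find_alignment_delay(recorded, clean, max_delay=200)
print(delay)   # recovers the 37-sample offset
```

The same idea extends directly to the real recordings: the clean audiobook stimulus plays the role of `clean`, and the StimTrak auxiliary channel plays the role of `recorded`.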
DOI:10.5281/zenodo.10980116
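As a starting point for loading the data, the HDF5 file can be opened with h5py along these lines. The group and dataset names below ("eeg/raw", "stimuli/envelope") are placeholders for this self-contained demo; the actual layout is documented in hdf5_dataset_info.txt.

```python
import h5py
import numpy as np

# Build a tiny stand-in file so the sketch is runnable; a real participant
# file from the dataset would be opened the same way in read mode.
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("eeg/raw", data=np.zeros((31, 1000)))       # channels x samples
    f.create_dataset("stimuli/envelope", data=np.zeros(1000))    # speech envelope

with h5py.File("demo.h5", "r") as f:
    f.visit(print)                 # list all groups and datasets in the file
    eeg = f["eeg/raw"][:]          # load a dataset as a NumPy array

print(eeg.shape)
```

`f.visit(print)` is a convenient way to discover the real group structure before hard-coding any paths.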