A large and rich EEG dataset for modeling human visual object recognition
Main Authors:
Format: Dataset
Language: eng
Subjects:
Online Access: Order full text
Summary:
Dataset motivation and summary
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to train properly, and to date there is a lack of large brain datasets that extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. The dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions drawn from the THINGS database. We release this dataset as a tool to foster research in visual neuroscience and computer vision.

Useful material
Additional dataset information: for information regarding the experimental paradigm, the EEG recording protocol and the dataset validation through computational modeling analyses, please refer to our paper.
Additional dataset resources: please visit the dataset page for the paper, dataset tutorial, code and more.
OSF: for additional data and resources visit our OSF project, where you can find:
- A detailed description of the raw EEG data files
- The preprocessed EEG data
- The stimuli images
- The EEG resting-state data
Citations: if you use any of our data, please cite our paper.
DOI: 10.25452/figshare.plus.18470912
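The preprocessed EEG data released on OSF can be inspected with standard Python tools. Below is a minimal sketch of loading one participant's data with NumPy. The directory layout, file name and dictionary keys (`preprocessed_eeg_data`, `ch_names`, `times`) are illustrative assumptions, not the confirmed format; consult the data description on OSF for the authoritative file layout.

```python
# Minimal sketch: load and inspect one participant's preprocessed EEG data.
# ASSUMPTIONS: the path and the dictionary keys below are hypothetical
# placeholders; check the raw/preprocessed data description on OSF for
# the actual file names and structure.
import numpy as np

data_path = "sub-01/preprocessed_eeg_training.npy"  # hypothetical path

# The file is assumed to store a pickled Python dict, hence allow_pickle=True.
data = np.load(data_path, allow_pickle=True).item()

eeg = data["preprocessed_eeg_data"]  # assumed shape: (conditions, repetitions, channels, time points)
ch_names = data["ch_names"]          # EEG channel labels
times = data["times"]                # time points relative to stimulus onset

print("EEG array shape:", eeg.shape)
print("Number of channels:", len(ch_names))
print("Epoch length (samples):", len(times))
```

The `.item()` call is needed because NumPy wraps a pickled dictionary in a 0-dimensional array when it is saved to a `.npy` file.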