Revealing Vision-Language Integration in the Brain with Multimodal Networks
Format: Article
Language: English
Online access: Order full text
Abstract: We use (multi)modal deep neural networks (DNNs) to probe for sites of multimodal integration in the human brain by predicting stereoencephalography (SEEG) recordings taken while human subjects watched movies. We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly-integrated language-vision models. Our target DNN models span different architectures (e.g., convolutional networks and transformers) and multimodal training techniques (e.g., cross-attention and contrastive learning). As a key enabling step, we first demonstrate that trained vision and language models systematically outperform their randomly initialized counterparts in their ability to predict SEEG signals. We then compare unimodal and multimodal models against one another. Because our target DNN models often have different architectures, numbers of parameters, and training sets (possibly obscuring those differences attributable to integration), we carry out a controlled comparison of two models (SLIP and SimCLR), which keep all of these attributes the same aside from input modality. Using this approach, we identify a sizable number of neural sites (on average 141 out of 1090 total sites, or 12.94%) and brain regions where multimodal integration seems to occur. Additionally, we find that among the variants of multimodal training techniques we assess, CLIP-style training is the best suited for downstream prediction of the neural activity in these sites.
DOI: 10.48550/arxiv.2406.14481
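The operationalization described in the abstract lends itself to a simple encoding-model comparison. The sketch below is not the authors' code; it is a minimal illustration, with hypothetical feature matrices and electrode responses, of how one might flag an SEEG site as a candidate multimodal-integration site: fit a regularized linear map from each model's features to the electrode signal and check whether the multimodal model's held-out prediction beats the unimodal and linearly-concatenated baselines.

```python
# Minimal sketch (hypothetical data, not the paper's pipeline): per-electrode
# ridge encoding models, compared across multimodal and unimodal feature sets.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_electrodes = 2000, 50            # movie time points x SEEG sites
feats = {
    "multimodal": rng.standard_normal((n_samples, 512)),  # e.g., SLIP-style features
    "vision":     rng.standard_normal((n_samples, 512)),  # e.g., SimCLR-style features
    "language":   rng.standard_normal((n_samples, 512)),  # e.g., text-only features
}
# Linearly-integrated baseline: concatenate the two unimodal feature sets.
feats["vision+language"] = np.hstack([feats["vision"], feats["language"]])
seeg = rng.standard_normal((n_samples, n_electrodes))     # recorded responses

def electrode_scores(X, Y):
    """Held-out Pearson correlation of ridge predictions, one score per electrode."""
    scores = np.empty(Y.shape[1])
    for e in range(Y.shape[1]):
        pred = cross_val_predict(
            RidgeCV(alphas=np.logspace(-2, 4, 7)), X, Y[:, e], cv=5
        )
        scores[e] = np.corrcoef(pred, Y[:, e])[0, 1]
    return scores

scores = {name: electrode_scores(X, seeg) for name, X in feats.items()}
best_baseline = np.max(
    [scores["vision"], scores["language"], scores["vision+language"]], axis=0
)
integration_sites = np.where(scores["multimodal"] > best_baseline)[0]
print(f"{len(integration_sites)} of {n_electrodes} electrodes favor the multimodal model")
```

In practice one would add a significance criterion (e.g., a permutation or bootstrap test across held-out folds) before labeling an electrode as an integration site; the comparison above only shows the basic model-versus-baseline logic implied by the abstract.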