Using deep neural networks to disentangle visual and semantic information in human perception and memory
Published in: Nature Human Behaviour, 2024-04, Vol. 8 (4), p. 702-717
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Mental representations of familiar categories are composed of visual and semantic information. Disentangling the contributions of visual and semantic information in humans is challenging because they are intermixed in mental representations. Deep neural networks that are trained on images, on text, or on paired images and text now enable us to disentangle human mental representations into their visual, visual–semantic and semantic components. Here we used these deep neural networks to uncover the content of human mental representations of familiar faces and objects when they are viewed or recalled from memory. The results show a larger visual than semantic contribution when images are viewed and a reversed pattern when they are recalled. We further reveal a previously unknown unique contribution of an integrated visual–semantic representation in both perception and memory. We propose a new framework in which visual and semantic information contribute independently and interactively to mental representations in perception and memory.
Here Shoham and colleagues use deep learning algorithms to disentangle the contributions of visual, visual–semantic and semantic information in human face and object representations. Visual–semantic and semantic algorithms improve prediction of human representations.
ISSN: 2397-3374
DOI: 10.1038/s41562-024-01816-9
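
As an illustration of the general approach summarized in the abstract, the sketch below shows one common way to relate network representations to human similarity data: compute representational dissimilarity matrices (RDMs) from embeddings of an image-trained, a text-trained and an image-text-trained model, then estimate each predictor's unique contribution to a human RDM. This is not the authors' code: the embeddings and the human RDM are random placeholders, and the variance-partitioning scheme (drop in R² when a predictor is left out) is one simple choice among several.

```python
# Minimal RDM-comparison sketch (illustrative only; all data are simulated).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, d = 40, 128  # e.g. 40 familiar faces or objects, 128-dim embeddings

# Placeholder embeddings; in practice these would come from pretrained networks
# trained on images (visual), text (semantic) or paired image-text data.
visual_emb = rng.normal(size=(n_items, d))
semantic_emb = rng.normal(size=(n_items, d))
vis_sem_emb = rng.normal(size=(n_items, d))
# Stand-in for behavioural dissimilarities (perception or memory condition).
human_rdm = pdist(rng.normal(size=(n_items, d)), metric="correlation")

def rdm(embeddings):
    """Condensed representational dissimilarity vector (1 - Pearson r per pair)."""
    return pdist(embeddings, metric="correlation")

predictors = {
    "visual": rdm(visual_emb),
    "semantic": rdm(semantic_emb),
    "visual-semantic": rdm(vis_sem_emb),
}

# Zero-order fit of each model RDM to the human RDM.
for name, model_rdm in predictors.items():
    rho, _ = spearmanr(model_rdm, human_rdm)
    print(f"{name:>16s} RDM vs human RDM: Spearman rho = {rho:.3f}")

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Unique contribution of each predictor: R^2 drop when it is left out
# of the joint regression on the human RDM.
X_full = np.column_stack(list(predictors.values()))
full_r2 = r_squared(X_full, human_rdm)
for i, name in enumerate(predictors):
    reduced = np.delete(X_full, i, axis=1)
    print(f"unique R^2 of {name}: {full_r2 - r_squared(reduced, human_rdm):.3f}")
```

In an actual analysis along these lines, the three predictors would come from pretrained networks (for example a face or object recognition model, a language model and a joint image-text model), and the human RDM from similarity judgments collected while items are viewed or recalled from memory.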