Using deep neural networks to disentangle visual and semantic information in human perception and memory
Published in: | Nature human behaviour 2024-04, Vol.8 (4), p.702-717 |
---|---|
Main authors: | Shoham, Adva; Grosbard, Idan Daniel; Patashnik, Or; Cohen-Or, Daniel; Yovel, Galit |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Behavioral Sciences; Experimental Psychology; Humans; Memory; Mental representation; Neural networks; Neurosciences; Recall; Semantics |
Online access: | Full text |
container_end_page | 717 |
---|---|
container_issue | 4 |
container_start_page | 702 |
container_title | Nature human behaviour |
container_volume | 8 |
creator | Shoham, Adva; Grosbard, Idan Daniel; Patashnik, Or; Cohen-Or, Daniel; Yovel, Galit |
description | Mental representations of familiar categories are composed of visual and semantic information. Disentangling the contributions of visual and semantic information in humans is challenging because they are intermixed in mental representations. Deep neural networks that are trained either on images or on text or by pairing images and text enable us now to disentangle human mental representations into their visual, visual–semantic and semantic components. Here we used these deep neural networks to uncover the content of human mental representations of familiar faces and objects when they are viewed or recalled from memory. The results show a larger visual than semantic contribution when images are viewed and a reversed pattern when they are recalled. We further reveal a previously unknown unique contribution of an integrated visual–semantic representation in both perception and memory. We propose a new framework in which visual and semantic information contribute independently and interactively to mental representations in perception and memory.
Here Shoham and colleagues use deep learning algorithms to disentangle the contributions of visual, visual–semantic and semantic information in human face and object representations. Visual–semantic and semantic algorithms improve prediction of human representations. |
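The approach summarized in the description can be made concrete with a short sketch. The following is a minimal, hypothetical illustration (not the authors' published code) of comparing visual, semantic and visual–semantic model representations against human similarity judgments via representational similarity analysis: all embeddings below are random stand-ins, and the `rdm` helper and variable names are illustrative assumptions.

```python
# Minimal sketch of the representational-similarity logic described in the
# abstract; NOT the authors' code. A real analysis would replace the random
# arrays with embeddings of the same face/object stimuli from a vision-only
# model, a text-only model and an image-text model (e.g. CLIP), plus human
# similarity judgments for the same stimulus pairs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

def rdm(embeddings: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix, as a condensed upper triangle."""
    return pdist(embeddings, metric="correlation")

rng = np.random.default_rng(0)
n_stimuli = 40

visual = rdm(rng.standard_normal((n_stimuli, 512)))           # image-trained DNN (stub)
semantic = rdm(rng.standard_normal((n_stimuli, 768)))         # text-trained DNN (stub)
visual_semantic = rdm(rng.standard_normal((n_stimuli, 512)))  # image-text DNN (stub)
human = rdm(rng.standard_normal((n_stimuli, 10)))             # human judgments (stub)

# Zero-order fit of each model representation to the human representation.
for name, model in [("visual", visual), ("semantic", semantic),
                    ("visual-semantic", visual_semantic)]:
    rho, _ = spearmanr(model, human)
    print(f"{name}: Spearman rho = {rho:.3f}")

# Joint regression: does the visual-semantic model explain variance beyond
# the two unimodal models (the paper's "unique contribution" question)?
X = np.column_stack([visual, semantic, visual_semantic])
print("joint R^2:", LinearRegression().fit(X, human).score(X, human))
```

With the random stand-ins the correlations will of course be near zero; in the paper, the relative weights of the visual and semantic predictors reverse between viewing and recall from memory.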
doi_str_mv | 10.1038/s41562-024-01816-9 |
format | Article |
publisher | Nature Publishing Group UK, London |
identifiers | ISSN: 2397-3374; EISSN: 2397-3374; DOI: 10.1038/s41562-024-01816-9; PMID: 38332339 |
rights | The Author(s), under exclusive licence to Springer Nature Limited 2024. |
orcid | 0000-0002-7923-2474; 0000-0001-7757-6137; 0000-0003-0971-2357 |
fulltext | fulltext |
identifier | ISSN: 2397-3374 |
ispartof | Nature human behaviour, 2024-04, Vol.8 (4), p.702-717 |
issn | 2397-3374 (ISSN); 2397-3374 (EISSN) |
language | eng |
recordid | cdi_proquest_miscellaneous_2925035441 |
source | Nature; SpringerLink Journals - AutoHoldings |
subjects | 4014/477/2811; 631/477/2811; Algorithms; Behavioral Sciences; Biomedical and Life Sciences; Experimental Psychology; Humans; Life Sciences; Memory; Mental representation; Microeconomics; Neural networks; Neurosciences; Pairing; Personality and Social Psychology; Recall; Semantics |
title | Using deep neural networks to disentangle visual and semantic information in human perception and memory |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T18%3A45%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Using%20deep%20neural%20networks%20to%20disentangle%20visual%20and%20semantic%20information%20in%20human%20perception%20and%20memory&rft.jtitle=Nature%20human%20behaviour&rft.au=Shoham,%20Adva&rft.date=2024-04-01&rft.volume=8&rft.issue=4&rft.spage=702&rft.epage=717&rft.pages=702-717&rft.issn=2397-3374&rft.eissn=2397-3374&rft_id=info:doi/10.1038/s41562-024-01816-9&rft_dat=%3Cproquest_cross%3E2925035441%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3046096511&rft_id=info:pmid/38332339&rfr_iscdi=true |