Forms of explanation and understanding for neuroscience and artificial intelligence

Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. To discuss what constitutes scientific progress, one must have a goal in mind (progress toward what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitute valid explanations of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Toward this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.

Bibliographic Details
Published in: Journal of neurophysiology, 2021-12, Vol. 126 (6), p. 1860-1874
Author: Thompson, Jessica A F
Format: Article
Language: English
Online access: Full text
DOI: 10.1152/jn.00195.2021
ISSN: 0022-3077
EISSN: 1522-1598
PMID: 34644128
Subjects: Artificial Intelligence; Deep Learning; Humans; Neurosciences