Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability

The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, "Black-box Interpretability," is...

Detailed Description

Bibliographic Details
Author: Grzankowski, Alex
Format: Article
Language: eng
Subjects: Computer Science - Artificial Intelligence
creator Grzankowski, Alex
description The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, "Black-box Interpretability," is wrongheaded. But there is a better way. There is an exciting and emerging discipline of "Inner Interpretability" (and specifically Mechanistic Interpretability) that aims to uncover the internal activations and weights of models in order to understand what they represent and the algorithms they implement. In my view, a crucial mistake in Black-box Interpretability is the failure to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I can't pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. So the conclusion is modest, but the important point in my view is seeing how to get the research on the right track. Towards the end of the paper, I will show how some of the philosophical concepts can be used to further refine how Inner Interpretability is approached, so the paper helps draw out a profitable, future two-way exchange between Philosophers and Computer Scientists.
doi 10.48550/arxiv.2402.00901
format Article
identifier DOI: 10.48550/arxiv.2402.00901
language eng
source arXiv.org
subjects Computer Science - Artificial Intelligence
title Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability