Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour
The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the process extracting some more general lessons that may also be useful for future debates.
Published in: The British journal for the philosophy of science, 2023-09, Vol. 74 (3), p. 681-712
Author: Buckner, Cameron
Format: Article
Language: eng
Online access: Full text
creator | Buckner, Cameron |
doi_str_mv | 10.1086/714960 |
publisher | The University of Chicago Press |
rights | The British Society for the Philosophy of Science. All rights reserved. |
identifier | ISSN: 0007-0882 |
issn | 0007-0882 1464-3537 |