Assumptions About Algorithms’ Capacity for Discrimination
Although their implementation has inspired optimism in many domains, algorithms can both systematize discrimination and obscure its presence. In seven studies, we test the hypothesis that people instead tend to assume algorithms discriminate less than humans due to beliefs that algorithms tend to be both more accurate and less emotional evaluators. As a result of these assumptions, people are more interested in being evaluated by an algorithm when they anticipate that discrimination against them is possible. We finally investigate the degree to which information about how algorithms train using data sets consisting of human judgments and decisions changes people's increased preferences for algorithms when they themselves anticipate discrimination. Taken together, these studies indicate that algorithms appear less discriminatory than humans, making people (potentially erroneously) more comfortable with their use.
Saved in:
| Published in: | Personality & Social Psychology Bulletin, 2022-04, Vol. 48 (4), p. 582-595 |
|---|---|
| Main authors: | Jago, Arthur S.; Laurin, Kristin |
| Format: | Article |
| Language: | English |
| Subjects: | Algorithms; Discrimination; Emotions; Humans; Judgment; Optimism |
| Online access: | Full text |
| DOI: | 10.1177/01461672211016187 |
|---|---|
| ISSN: | 0146-1672 |
| EISSN: | 1552-7433 |
| PMID: | 34044648 |
| Publisher: | SAGE Publications (Los Angeles, CA) |