When Trust is Zero Sum: Automation Threat to Epistemic Agency

AI researchers and ethicists have long worried about the threat that automation poses to human dignity, autonomy, and to the sense of personal value that is tied to work. Typically, proposed solutions to this problem focus on ways in which we can reduce the number of job losses which result from automation, ways to retrain those that lose their jobs, or ways to mitigate the social consequences of those job losses...

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Malone, Emmie, Afroogh, Saleh, DCruz, Jason, Varshney, Kush R
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Malone, Emmie
Afroogh, Saleh
DCruz, Jason
Varshney, Kush R
description AI researchers and ethicists have long worried about the threat that automation poses to human dignity, autonomy, and to the sense of personal value that is tied to work. Typically, proposed solutions to this problem focus on ways in which we can reduce the number of job losses which result from automation, ways to retrain those that lose their jobs, or ways to mitigate the social consequences of those job losses. However, even in cases where workers keep their jobs, their agency within them might be severely downgraded. For instance, human employees might work alongside AI but not be allowed to make decisions or not be allowed to make decisions without consulting with or coming to agreement with the AI. This is a kind of epistemic harm (which could be an injustice if it is distributed on the basis of identity prejudice). It diminishes human agency (in constraining people's ability to act independently), and it fails to recognize the workers' epistemic agency as qualified experts. Workers, in this case, aren't given the trust they are entitled to. This means that issues of human dignity remain even in cases where everyone keeps their job. Further, job retention focused solutions, such as designing an algorithm to work alongside the human employee, may only enable these harms. Here, we propose an alternative design solution, adversarial collaboration, which addresses the traditional retention problem of automation, but also addresses the larger underlying problem of epistemic harms and the distribution of trust between AI and humans in the workplace.
doi_str_mv 10.48550/arxiv.2408.08846
format Article
creationdate 2024-08-16
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
link https://arxiv.org/abs/2408.08846
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2408.08846
language eng
recordid cdi_arxiv_primary_2408_08846
source arXiv.org
subjects Computer Science - Computers and Society
title When Trust is Zero Sum: Automation Threat to Epistemic Agency