When Trust is Zero Sum: Automation Threat to Epistemic Agency
Saved in:

Format: Article
Language: English
Online Access: Order full text
Abstract: AI researchers and ethicists have long worried about the threat that
automation poses to human dignity, autonomy, and the sense of personal value
that is tied to work. Typically, proposed solutions to this problem focus on
ways to reduce the number of job losses that result from automation, ways to
retrain those who lose their jobs, or ways to mitigate the social consequences
of those job losses. However, even when workers keep their jobs, their agency
within them might be severely downgraded. For instance, human employees might
work alongside AI but be barred from making decisions at all, or from making
them without consulting, or reaching agreement with, the AI. This is a kind of
epistemic harm (which could be an injustice if it is distributed on the basis
of identity prejudice). It diminishes human agency (by constraining people's
ability to act independently), and it fails to recognize the workers'
epistemic agency as qualified experts. Workers, in this case, are not given
the trust they are entitled to. This means that issues of human dignity remain
even when everyone keeps their job. Further, job-retention-focused solutions,
such as designing an algorithm to work alongside the human employee, may only
enable these harms. Here, we propose an alternative design solution,
adversarial collaboration, which addresses the traditional retention problem
of automation but also addresses the larger underlying problem of epistemic
harms and the distribution of trust between AI and humans in the workplace.
DOI: 10.48550/arxiv.2408.08846