First-person representations and responsible agency in AI
Published in: Synthese (Dordrecht) 2021-12, Vol. 199 (3-4), pp. 7061-7079
Main author:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: In this paper I investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don't impose any in-principle barrier to AISs being responsible agents, I identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attributing that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. I clarify what this widespread assumption involves and conclude that the possibility of AISs' moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.
ISSN: 0039-7857, 1573-0964
DOI: 10.1007/s11229-021-03105-8