Virtuous vs. utilitarian artificial moral agents
Saved in:

| Published in: | AI & society 2020-03, Vol. 35 (1), p. 263-271 |
|---|---|
| Author: | |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly reviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act-utilitarian approach) is superior, because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a two-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well suited for machine ethics. |
| ISSN: | 0951-5666; 1435-5655 |
| DOI: | 10.1007/s00146-018-0871-3 |