Innocence over utilitarianism: Heightened moral standards for robots in rescue dilemmas


Bibliographic Details
Published in: European Journal of Social Psychology, 2023-06, Vol. 53 (4), p. 779–804
Authors: Sundvall, Jukka; Drosinou, Marianna; Hannikainen, Ivar; Elovaara, Kaisa; Halonen, Juho; Herzon, Volo; Kopecký, Robin; Jirout Košová, Michaela; Koverola, Mika; Kunnari, Anton; Perander, Silva; Saikkonen, Teemu; Palomäki, Jussi; Laakasuo, Michael
Format: Article
Language: English
Abstract: Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation arises specifically when contrasting utilitarian action with deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots’ and humans’ decisions in a rescue dilemma with no possibility of deontological inaction. A robot's decision to rescue an innocent victim of an accident was judged more positively than its decision to rescue two people culpable for the accident (Studies 1–2b). This pattern repeated in a large‐scale web survey (Study 3, N ≈ 19,000) and reversed when all victims were equally culpable or equally innocent (Study 5). Differences in judgments about humans’ and robots’ decisions were largest for norm‐violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans based on other moral standards as well.
ISSN: 0046-2772; 1099-0992
DOI: 10.1002/ejsp.2936