Attributions toward Artificial Agents in a modified Moral Turing Test
| Main authors: | , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Scientific Reports (2024). Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen and colleagues' (2000) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, because of its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality. |
| DOI: | 10.48550/arxiv.2406.11854 |