Symbiosis, not alignment, as the goal for liberal democracies in the transition to artificial general intelligence
Saved in:

| Published in | AI and Ethics (Online), 2024-05, Vol. 4 (2), pp. 315-324 |
|---|---|
| Author | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Full text |
Abstract: A transition to a world with artificial general intelligence (AGI) may occur within the next few decades. This transition may give rise to catastrophic risks from *misaligned* AGI, which have deservedly received significant attention. Here I argue that AGI systems that are *intent-aligned* (they always try to do what their operators want them to do) would also create catastrophic risks, mainly due to the power they concentrate in the hands of their operators. With time, that power would almost certainly be catastrophically exploited, potentially resulting in human extinction or permanent dystopia. I suggest that liberal democracies, if they decide to allow the development of AGI, may react to this threat by letting AGI take shape as an *intergenerational social project*, resulting in an arrangement where AGI is not intent-aligned but *symbiotic* with humans. I provide some tentative ideas on what the resulting arrangement may look like and consider what speaks for and against aiming for intent-aligned AGI as an intermediate step.
| ISSN | 2730-5953, 2730-5961 |
|---|---|
| DOI | 10.1007/s43681-023-00268-7 |