Linking simulation argument to the AI risk

Bibliographic Details
Published in: Futures: the journal of policy, planning and futures studies, 2015-09, Vol. 72, p. 27-31
Main Author: Cirkovic, Milan M
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Metaphysics, future studies, and artificial intelligence (AI) are usually regarded as rather distant, non-intersecting fields. There are, however, interesting points of contact which might highlight some potentially risky aspects of advanced computing technologies. While the original simulation argument of Nick Bostrom was formulated without reference to the enabling AI technologies and accompanying existential risks, I argue that there is an important generic link between the two, whose net effect under a range of plausible scenarios is to reduce the likelihood of our living in a simulation. This has several consequences for risk analysis and risk management, the most important being putting greater priority on confronting "traditional" existential risks, such as those following from the misuse of biotechnology, nuclear winter, or supervolcanism. In addition, the present argument demonstrates how, rather counterintuitively, seemingly abstract ontological speculations could, in principle, influence practical decisions on risk mitigation policies.
ISSN: 0016-3287, 1873-6378
DOI: 10.1016/j.futures.2015.05.003