Collective Intelligence as Infrastructure for Reducing Broad Global Catastrophic Risks
Published in: | arXiv.org 2023-09 |
Main authors: | , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Academic and philanthropic communities have grown increasingly concerned with global catastrophic risks (GCRs), including artificial intelligence safety, pandemics, biosecurity, and nuclear war. Outcomes of many, if not all, risk situations hinge on the performance of human groups, such as whether governments or scientific communities can work effectively. We propose to think about these issues as Collective Intelligence (CI) problems -- of how to process distributed information effectively. CI is a transdisciplinary research area whose applications involve human and animal groups, markets, robotic swarms, collections of neurons, and other distributed systems. In this article, we argue that improving CI in human groups can improve general resilience against a wide variety of risks. We summarize findings from the CI literature on conditions that improve human group performance, and discuss ways existing CI findings may be applied to GCR mitigation. We also suggest several directions for future research at the exciting intersection of these two emerging fields. |
ISSN: | 2331-8422 |
DOI: | 10.48550/arxiv.2205.03300 |