Exponential Contingency Explosion: Implications for Artificial General Intelligence
Saved in:
Published in: | IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2022-05, Vol. 52 (5), p. 2800-2808 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | The failure of complex artificial intelligence (AI) systems seems ubiquitous. To provide a model to describe these shortcomings, we define complexity in terms of a system's sensors and the number of environments or situations in which it performs. Complexity is considered not in terms of the difficulty of design, but in terms of the final performance of the system as a function of the sensor and environment counts. As the complexity of AI, or any system, increases linearly, the contingencies increase exponentially, and the number of possible design performances increases as a compound exponential. In this worst-case scenario, the exponential increase in contingencies makes the assessment of all contingencies difficult and eventually impossible. As the contingencies grow large, unexpected and undesirable contingencies are all expected to increase in number. This worst-case scenario corresponds to a highly connected, or conjunctive, system. For loosely connected, or disjunctive, systems, contingencies grow only linearly with complexity. Mitigation of unexpected outcomes in either case can be accomplished using tools such as design expertise and iterative redesign informed by intelligent testing. |
---|---|
ISSN: | 2168-2216, 2168-2232 |
DOI: | 10.1109/TSMC.2021.3056669 |
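The abstract's central quantitative claim, that a linear increase in sensors yields an exponential increase in contingencies and a compound-exponential increase in possible design performances, can be sketched numerically. This is an illustrative sketch only, not code from the paper; the function names, and the assumption of `n` sensors with `k` discrete states each and `r` possible system responses, are choices made here for illustration.

```python
# Illustrative sketch (not from the paper): counting contingencies for a
# system with n sensors, each reporting one of k discrete values.
# Contingencies (distinct sensor-input combinations) grow as k**n,
# exponential in n. The number of possible input-to-output mappings
# ("design performances") grows as r**(k**n) for r possible responses,
# a compound exponential.

def contingency_count(n_sensors: int, states_per_sensor: int) -> int:
    """Number of distinct sensor-input combinations (contingencies)."""
    return states_per_sensor ** n_sensors

def performance_count(n_sensors: int, states_per_sensor: int,
                      n_responses: int) -> int:
    """Number of distinct mappings from contingencies to responses."""
    return n_responses ** contingency_count(n_sensors, states_per_sensor)

# Adding one binary sensor doubles the contingencies but squares the
# number of possible designs:
print(contingency_count(3, 2))     # 8 contingencies
print(contingency_count(4, 2))     # 16 contingencies
print(performance_count(3, 2, 2))  # 256 possible designs
print(performance_count(4, 2, 2))  # 65536 possible designs
```

Note how the design space squares when one binary sensor is added, which is the "compound exponential" growth the abstract describes for the conjunctive (highly connected) worst case; in the disjunctive case, where sensors are handled independently, the count of contingencies to assess would instead grow only linearly in `n`.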