Safety Assurance of Artificial Intelligence-Based Systems: A Systematic Literature Review on the State of the Art and Guidelines for Future Work
Published in: | IEEE Access, 2022, Vol. 10, p. 130733-130770 |
---|---|
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | The objective of this research is to present the state of the art in the safety assurance of Artificial Intelligence (AI)-based systems, together with guidelines for future related work. For this purpose, a Systematic Literature Review covering 5090 peer-reviewed references relating safety to AI was carried out, with a focus on a subset of 329 references in which the safety assurance of AI-based systems is directly addressed. From 2016 onwards, research on the safety assurance of AI-based systems has grown markedly and has converged on five main approaches: performing black-box testing, using safety envelopes, designing fail-safe AI, combining white-box analyses with explainable AI, and establishing a safety assurance process throughout the system lifecycle. Each of these approaches is discussed in this paper, along with its features, pros, and cons. Finally, guidelines for future research topics are presented. They result from an analysis based on both the cross-fertilization among the reviewed references and the authors' experience with safety and AI. Spanning 15 research themes, these guidelines reinforce the need to deepen guidance for the safety assurance of AI-based systems by, e.g., analyzing datasets from a safety perspective, designing explainable AI, setting and justifying AI hyperparameters, and assuring the safety of hardware-implemented AI-based systems. |
---|---|
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2022.3229233 |
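
To make the "safety envelope" approach named in the abstract concrete, the following is a minimal illustrative sketch, not taken from the reviewed paper: an ML controller's command is accepted only while the monitored system state remains inside a pre-verified safe region, and a conservative fallback action (a simple form of fail-safe design) is used otherwise. All names, state variables, and threshold values are hypothetical.

```python
# Hypothetical runtime safety-envelope monitor (illustrative values only).
SPEED_LIMIT = 2.0      # assumed verified bound on speed, m/s
OBSTACLE_MARGIN = 0.5  # assumed minimum obstacle clearance, m

def inside_envelope(state):
    """Check the state against the (hypothetical) verified safe region."""
    speed, clearance = state
    return abs(speed) <= SPEED_LIMIT and clearance >= OBSTACLE_MARGIN

def safe_control(state, ml_command, fallback_command=0.0):
    """Pass the ML command through only while the envelope holds."""
    if inside_envelope(state):
        return ml_command       # AI component is free to act
    return fallback_command     # degrade to the conservative fail-safe action

# Usage: an unsafe state (clearance below the margin) triggers the fallback.
print(safe_control((1.5, 1.0), ml_command=0.8))  # -> 0.8 (ML command passes)
print(safe_control((1.5, 0.2), ml_command=0.8))  # -> 0.0 (fallback engaged)
```

The design point this sketch captures is that the AI component itself need not be fully verified; only the envelope check and the fallback controller must be, which is what makes the approach attractive for assurance.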