Position: A taxonomy for reporting and describing AI security incidents
Format: Article
Language: English
Online access: Order full text
Abstract: AI systems are vulnerable to attacks, and corresponding AI security incidents have been described. Although collecting AI safety incidents will become a regulatory requirement, there is no comparable proposal for collecting AI security incidents. In this position paper, we argue that such a proposal should be made, taking into account the interests and needs of different stakeholders: industry, providers, users, and researchers. We thus attempt to close this gap and propose a taxonomy alongside its requirements, such as machine readability and linkability with existing databases. We aim to spark discussion of which information is feasible, necessary, and possible to report and share within and outside organizations using AI.
DOI: 10.48550/arxiv.2412.14855
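
To illustrate the machine-readability and linkability requirements mentioned in the abstract, the following is a minimal sketch of what a machine-readable AI security incident record could look like. It is not the taxonomy proposed in the paper; all field names, the record structure, and the referenced incident-database URL are assumptions made for illustration only.

```python
# Illustrative sketch only: a minimal machine-readable AI security incident
# record with links to existing incident databases. Field names and the
# referenced database are assumptions, not the paper's proposed taxonomy.
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class AISecurityIncident:
    """One reportable AI security incident in a machine-readable form."""
    incident_id: str        # reporter-assigned identifier
    reported_on: date       # date the incident was reported
    attack_type: str        # e.g. "evasion", "poisoning", "model extraction"
    affected_system: str    # short description of the affected AI system
    reporter_role: str      # e.g. "provider", "user", "researcher"
    impact: str             # free-text summary of the observed impact
    related_records: list[str] = field(default_factory=list)  # IDs/URLs in existing databases (linkability)

    def to_json(self) -> str:
        """Serialize the record so it can be shared within and outside an organization."""
        data = asdict(self)
        data["reported_on"] = self.reported_on.isoformat()
        return json.dumps(data, indent=2)


# Example usage with made-up values:
incident = AISecurityIncident(
    incident_id="ORG-2024-0042",
    reported_on=date(2024, 11, 3),
    attack_type="evasion",
    affected_system="image-based malware classifier",
    reporter_role="provider",
    impact="misclassification of adversarially perturbed samples",
    related_records=["https://incidentdatabase.ai/cite/1"],
)
print(incident.to_json())
```

A flat, serializable record like this is one plausible way to satisfy machine readability, while the `related_records` field hints at how linkability with existing databases could be expressed; the paper's actual taxonomy should be consulted for the fields it proposes.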