Quantitative AI Risk Assessments: Opportunities and Challenges
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract:
Although AI systems are increasingly being leveraged to provide value to organizations, individuals, and society, significant attendant risks have been identified and have manifested. These risks have led to proposed regulations, litigation, and general societal concerns.

As with any promising technology, organizations want to benefit from the positive capabilities of AI technology while reducing the risks. The best way to reduce risks is to implement comprehensive AI lifecycle governance, where policies and procedures are described and enforced during the design, development, deployment, and monitoring of an AI system. Although support for comprehensive governance is beginning to emerge, organizations often need to identify the risks of deploying an already-built model without knowledge of how it was constructed or access to its original developers. Such an assessment would quantify the risks of an existing model in a manner analogous to how a home inspector might assess the risks of an already-built home or a physician might assess overall patient health based on a battery of tests.

Several AI risks can be quantified using metrics from the technical community. However, there are numerous issues in deciding how these metrics can be leveraged to create a quantitative AI risk assessment. This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach, and discussing how it might influence AI regulations.
DOI: 10.48550/arxiv.2209.06317
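
The abstract's claim that "several AI risks can be quantified using metrics from the technical community" can be made concrete with a small example. The sketch below is illustrative only and is not taken from the paper: it computes the disparate-impact ratio, one widely used group-fairness metric that an assessment of an already-built classifier might report. The function name, the sample predictions and group labels, and the 0.8 flagging threshold are all assumptions made here for illustration.

```python
# Illustrative sketch (not from the paper): one "metric from the technical
# community" that a quantitative AI risk assessment might report for an
# already-built classifier, namely the disparate-impact ratio, a common
# group-fairness measure. All names, data, and the 0.8 threshold are hypothetical.

def disparate_impact_ratio(predictions, groups, favorable=1,
                           privileged="A", unprivileged="B"):
    """Ratio of favorable-outcome rates: unprivileged group over privileged group."""
    def favorable_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(1 for p in outcomes if p == favorable) / len(outcomes)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

if __name__ == "__main__":
    # Hypothetical predictions from a model whose construction is unknown,
    # paired with a protected-group label for each prediction.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    ratio = disparate_impact_ratio(preds, groups)
    # A common (though contestable) rule of thumb flags ratios below 0.8
    # as a potential disparate-impact risk.
    print(f"disparate impact ratio = {ratio:.2f}",
          "(flagged as a risk)" if ratio < 0.8 else "(not flagged)")
```

A single number like this is easy to compute; the harder questions the abstract alludes to are which metrics to include, how to set thresholds, and how to combine many such measurements into an overall assessment of an existing model.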