Usage Governance Advisor: from Intent to AI Governance
Main Authors:
Format: Article
Language: English
Online Access: Order full text
Abstract: Evaluating the safety of AI systems is a pressing concern for organizations deploying them. Beyond the societal damage done by unfair systems, deployers are concerned about the legal repercussions and reputational damage incurred by using models that are unsafe. Safety covers both what a model does (e.g., can it be used to reveal personal information from its training set?) and how a model was built (e.g., was it trained only on licensed data sets?). Determining the safety of an AI system requires gathering information from a wide set of heterogeneous sources, including safety benchmarks and technical documentation for the models used in that system. In addition, responsible use is encouraged through mechanisms that advise and help the user take mitigating actions where safety risks are detected. We present Usage Governance Advisor, which creates semi-structured governance information, identifies and prioritizes risks according to the intended use case, recommends appropriate benchmarks and risk assessments, and, importantly, proposes mitigation strategies and actions.
DOI: 10.48550/arxiv.2412.01957
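The abstract outlines a pipeline: take an intended use case, derive prioritized risks, and map each risk to recommended benchmarks and mitigation actions. The sketch below is a rough illustration of that shape only, not the paper's actual interface: every class, function, risk label, benchmark, and mitigation named here is a hypothetical placeholder.

```python
# Hypothetical sketch of a usage-governance flow like the one the abstract
# describes. All names and catalog entries are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class RiskAssessment:
    risk: str
    priority: int                      # higher = more urgent for this use case
    benchmarks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)


# Illustrative catalog mapping risks to evaluation benchmarks and mitigations.
RISK_CATALOG = {
    "personal-data-leakage": RiskAssessment(
        risk="personal-data-leakage", priority=0,
        benchmarks=["membership-inference-probe"],
        mitigations=["PII-scrubbing output filter"]),
    "unlicensed-training-data": RiskAssessment(
        risk="unlicensed-training-data", priority=0,
        benchmarks=["training-data-provenance-audit"],
        mitigations=["restrict to models with documented data licenses"]),
    "unfair-outcomes": RiskAssessment(
        risk="unfair-outcomes", priority=0,
        benchmarks=["demographic-parity-eval"],
        mitigations=["post-hoc bias mitigation", "human review of decisions"]),
}


def advise(use_case: str, risk_weights: dict[str, int]) -> list[RiskAssessment]:
    """Rank catalog risks for a use case by the supplied weights and return
    the assessments, highest-weight (most urgent) risk first."""
    ranked = []
    for name, weight in sorted(risk_weights.items(), key=lambda kv: -kv[1]):
        entry = RISK_CATALOG[name]
        entry.priority = weight
        ranked.append(entry)
    return ranked


if __name__ == "__main__":
    # Example: a hiring-screening use case weights fairness above data leakage.
    plan = advise("resume screening assistant",
                  {"unfair-outcomes": 3, "personal-data-leakage": 2})
    for a in plan:
        print(a.risk, "->", a.benchmarks, "/", a.mitigations)
```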