An Audit Framework for Technical Assessment of Binary Classifiers
Format: Article
Language: English
Abstract: Multilevel models using logistic regression (MLogRM) and random forest models (RFM) are increasingly deployed in industry for binary classification. The European Commission's proposed Artificial Intelligence Act (AIA) requires, under certain conditions, that the application of such models be fair, transparent, and ethical, which in turn implies a technical assessment of these models. This paper proposes and demonstrates an audit framework for the technical assessment of RFMs and MLogRMs, focusing on model-, discrimination-, and transparency-and-explainability-related aspects. To measure these aspects, 20 KPIs are proposed and paired with a traffic-light risk assessment method. An open-source dataset is used to train an RFM and an MLogRM, the KPIs are computed, and the results are compared against the traffic lights. The performance of popular explainability methods such as kernel SHAP and tree SHAP is assessed. The framework is expected to assist regulatory bodies in performing conformity assessments of binary classifiers, and also to benefit providers and users deploying such AI systems in complying with the AIA.
DOI: 10.48550/arxiv.2211.09500
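The traffic-light pairing described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual framework: the KPI names, values, and green/amber thresholds below are assumptions chosen only to show how a measured KPI could be mapped to a green/amber/red risk rating.

```python
def traffic_light(value, green_min, amber_min):
    """Map a higher-is-better KPI value to a traffic-light rating.

    Thresholds are hypothetical; a real audit framework would fix them
    per KPI (e.g. accuracy, AUC, fairness ratios) in advance.
    """
    if value >= green_min:
        return "green"
    if value >= amber_min:
        return "amber"
    return "red"

# Hypothetical KPI measurements for a trained binary classifier:
# name -> (measured value, green threshold, amber threshold)
kpis = {
    "accuracy": (0.91, 0.90, 0.80),
    "auc": (0.85, 0.90, 0.75),
    "demographic_parity_ratio": (0.72, 0.80, 0.60),
}

for name, (value, green_min, amber_min) in kpis.items():
    rating = traffic_light(value, green_min, amber_min)
    print(f"{name}: {value:.2f} -> {rating}")
```

Running the sketch rates accuracy green and the other two KPIs amber, mimicking how an auditor could scan 20 such KPIs for red flags before a conformity assessment.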