Meaningful XAI Based on User-Centric Design Methodology
Format: Article
Language: English
Abstract:

This report first takes stock of XAI-related requirements appearing in various EU directives, regulations, guidelines, and CJEU case law. This analysis of existing requirements will permit us to have a clearer vision of the purposes, the "why", of XAI, which we separate into five categories: contestability, empowerment/redressing information asymmetries, control over system performance, evaluation of algorithmic decisions, and public administration transparency. The analysis of legal requirements also permits us to create four categories of recipients for explainability: data science teams; human operators of the system; persons affected by algorithmic decisions; and regulators/judges/auditors. Lastly, we identify four main operational contexts for explainability: XAI for the upstream design and testing phase; XAI for human-on-the-loop control; XAI for human-in-the-loop control; and XAI for ex-post challenges and investigations.

Second, we will present the user-centered design methodology, which takes the purposes, the recipients and the operational context into account in order to develop optimal XAI solutions.

Third, we will suggest a methodology to permit suppliers and users of high-risk AI applications to propose local XAI solutions that are effective in the sense of being "meaningful", that is, useful in light of the operational, safety and fundamental rights contexts. The process used to develop these "meaningful" XAI solutions will be based on the user-centric design principles examined in the second part.

Fourth, we will suggest that the European Commission issue guidelines to provide a harmonised approach to defining "meaningful" explanations based on the purposes, audiences and operational contexts of AI systems. These guidelines would apply to the AI Act, but also to the other EU texts requiring explanations for algorithmic systems and results.
DOI: 10.48550/arxiv.2308.13228