Unraveling ML Models of Emotion With NOVA: Multi-Level Explainable AI for Non-Experts

Bibliographic Details
Published in: IEEE Transactions on Affective Computing, 2022-07, Vol. 13 (3), pp. 1155-1167
Authors: Heimerl, Alexander; Weitz, Katharina; Baur, Tobias; Andre, Elisabeth
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In this article, we introduce NOVA, a next-generation annotation tool for emotional behaviour analysis that implements a workflow interactively incorporating the 'human in the loop'. A central feature of NOVA is semi-supervised active learning, in which machine learning techniques are applied already during the annotation process to pre-label data automatically. Furthermore, NOVA implements recent eXplainable AI (XAI) techniques to provide users with both a confidence value for the automatically predicted annotations and visual explanations. In a user study with 53 participants, we investigate how such techniques assist non-experts in terms of trust, perceived self-efficacy, cognitive workload, and the formation of correct mental models of the system. The results show that NOVA can easily be used by non-experts and leads to high computer self-efficacy. Furthermore, the results indicate that XAI visualisations help users build more accurate mental models of the machine learning system than the baseline condition does. Nevertheless, we suggest that explanations in the field of AI should be focused more strongly on user needs as well as on the classification task and the model they are meant to explain.
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2020.3043603