Using Large Language Models to Compare Explainable Models for Smart Home Human Activity Recognition
Format: Article
Language: English
Online access: Order full text
Abstract: Recognizing daily activities with unobtrusive sensors in smart environments enables various healthcare applications. Monitoring how subjects perform activities at home and how their routines change over time can reveal early symptoms of health issues, such as cognitive decline. Most approaches in this field use deep learning models, which are often seen as black boxes mapping sensor data to activities. However, non-expert users like clinicians need to trust and understand these models' outputs. Thus, eXplainable AI (XAI) methods for Human Activity Recognition have emerged to provide intuitive natural language explanations from these models. Different XAI methods generate different explanations, and their effectiveness is typically evaluated through user surveys, which are often challenging in terms of cost and fairness. This paper proposes an automatic evaluation method that uses Large Language Models (LLMs) to identify, in a pool of candidates, the best XAI approach for non-expert users. Our preliminary results suggest that LLM evaluation aligns with user surveys.
DOI: 10.48550/arxiv.2408.06352