EvAlignUX: Advancing UX Research through LLM-Supported Exploration of Evaluation Metrics
Saved in:
Main authors: | , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Evaluating UX in the context of AI's complexity, unpredictability, and
generative nature presents unique challenges. HCI scholars lack sufficient tool
support to build knowledge around diverse evaluation metrics and develop
comprehensive UX evaluation plans. In this paper, we introduce EvAlignUX, an
innovative system grounded in scientific literature and powered by large
language models (LLMs), designed to help HCI scholars explore evaluation
metrics and their relationship to potential research outcomes. A user study
involving 19 HCI scholars revealed that EvAlignUX significantly improved the
perceived clarity, specificity, feasibility, and overall quality of their
evaluation proposals. The use of EvAlignUX enhanced participants' thought
processes, resulting in the creation of a Question Bank that can be used to
guide UX Evaluation Development. Additionally, the influence of researchers'
backgrounds on their perceived inspiration and concerns about over-reliance on
AI highlight future research directions for AI's role in fostering critical
thinking. |
DOI: | 10.48550/arxiv.2409.15471 |