A Semi-Automated Usability Evaluation Framework for Interactive Image Segmentation Systems
Main Authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | For complex segmentation tasks, the achievable accuracy of fully automated
systems is inherently limited. Specifically, when a precise segmentation result
is desired for a small number of data sets, semi-automatic methods exhibit a
clear benefit for the user. The optimization of human-computer interaction
(HCI) is an essential part of interactive image segmentation. Nevertheless,
publications introducing novel interactive segmentation systems (ISS) often
lack an objective comparison of HCI aspects. It is demonstrated that even when
the underlying segmentation algorithm is the same across interactive
prototypes, their user experience may vary substantially. As a result, users
prefer simple interfaces as well as a considerable degree of freedom to control
each iterative step of the segmentation. In this article, an objective method
for the comparison of ISS is proposed, based on extensive user studies. A
summative qualitative content analysis is conducted via abstraction of the
visual and verbal feedback given by the participants. A direct assessment of
the segmentation system is performed by the users via the System Usability
Scale (SUS) and AttrakDiff-2 questionnaires. Furthermore, an approximation of
the usability findings of those studies is introduced, derived solely from user
actions that the system can measure while participants work with the
interactive segmentation prototypes. The prediction of all questionnaire
results has an average relative error of 8.9%, which is close to the expected
precision of the questionnaire results themselves. This automated evaluation
scheme may significantly reduce the resources necessary to investigate each
variation of a prototype's user interface (UI) features and segmentation
methodologies. |
---|---|
DOI: | 10.48550/arxiv.1909.00482 |
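
The abstract above refers to SUS questionnaire scores and to the average relative error between predicted and reported results. The sketch below is not taken from the paper; it only illustrates the standard SUS scoring rule and a plain average-relative-error computation under that reading. All function names, participant responses, and predicted values are hypothetical.

```python
# Illustrative sketch: standard SUS scoring (10 Likert items, 1-5, mapped to
# 0-100) and the average relative error between scores predicted from logged
# user actions and scores reported by participants. Not the paper's code;
# all numbers below are made up.

from typing import Sequence


def sus_score(responses: Sequence[int]) -> float:
    """Standard SUS scoring: odd items contribute (response - 1),
    even items contribute (5 - response); the sum is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)


def average_relative_error(predicted: Sequence[float],
                           observed: Sequence[float]) -> float:
    """Mean of |prediction - observation| / observation over all participants."""
    return sum(abs(p - o) / o for p, o in zip(predicted, observed)) / len(observed)


if __name__ == "__main__":
    # Hypothetical reported questionnaire responses for two participants.
    reported = [
        sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]),  # 85.0
        sus_score([3, 2, 4, 2, 4, 3, 4, 2, 3, 2]),  # 67.5
    ]
    # Hypothetical scores predicted from system-measurable interaction logs.
    predicted = [80.0, 62.5]
    print(f"Average relative error: {average_relative_error(predicted, reported):.1%}")
```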