Comparison of techniques for matching of usability problem descriptions

Full Description

Bibliographic Details
Published in: Interacting with Computers 2008-12, Vol. 20 (6), p. 505-514
Main authors: Hornbaek, Kasper; Frokjaer, Erik
Format: Article
Language: English
Online access: Full text
Description
Abstract: Matching of usability problem descriptions consists of determining which problem descriptions are similar and which are not. In most comparisons of evaluation methods, matching helps determine the overlap among methods and among evaluators. However, matching has received scant attention in usability research and may be fundamentally unreliable. We compare how 52 novice evaluators match the same set of problem descriptions from three think-aloud studies. For matching the problem descriptions, the evaluators use either (a) the similarity of solutions to the problems, (b) a prioritization effort for the owner of the application tested, (c) a model proposed by Lavery and colleagues [Lavery, D., Cockton, G., Atkinson, M.P., 1997. Comparison of evaluation methods using structured usability problem reports. Behaviour and Information Technology, 16 (4/5), 246–266], or (d) the User Action Framework [Andre, T.S., Hartson, H.R., Belz, S.M., McCreary, F.A., 2001. The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human–Computer Studies, 54 (1), 107–136]. The resulting matches differ, both in the number of problems grouped or identified as unique and in the content of the problem descriptions that were matched. Evaluators report different concerns and foci of attention when using the techniques. We illustrate how these differences among techniques might adversely influence the reliability of findings in usability research, and discuss some remedies.
ISSN: 0953-5438, 1873-7951
DOI: 10.1016/j.intcom.2008.08.005