Evaluating the Effectiveness of a Joint Cognitive System: Metrics, Techniques, and Frameworks



Bibliographic Details
Published in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2006, Vol. 50 (3), p. 314-318
Main Authors: Potter, Scott S., Woods, David D., Roth, Emilie M., Fowlkes, Jennifer, Hoffman, Robert R.
Format: Review
Language: English
Online Access: Full text
Description
Summary: An implication of Cognitive Systems Engineering is that joint cognitive systems (JCS; also known as complex socio-technical systems) need to be evaluated for their effectiveness in meeting complex cognitive work requirements. This requires measures that go well beyond "typical" performance metrics, such as the number of subtask goals achieved per person per unit of time, and the corresponding simple baseline comparisons or workload assessment metrics. The JCS perspective implies that the system must be designed and evaluated in light of the shift in the role of the human supervisor, which imposes new types of requirements on the human operator. Previous research in CSE and our own experience have led us to identify a set of generic JCS support requirements that apply to cognitive work by any cognitive agent or set of cognitive agents, including teams of people and machine agents. Metrics will have to reflect such phenomena as "teamwork" or "resilience" of a JCS. This places new burdens on evaluation techniques and frameworks, since metrics should be generated from a principled approach and based on fundamental principles of interest to the designers of the JCS. A further implication of the JCS perspective is that complex cognitive systems need to be evaluated for usability, usefulness, and understandability, each of which goes well beyond raw performance. However, conceptually grounded evaluation frameworks, corresponding operational techniques, and corresponding measures for these are limited. Therefore, to advance the state of the field, we have gathered a set of researchers and practitioners to present recent evaluation work and stimulate discussion.
ISSN:1541-9312
1071-1813
2169-5067
DOI:10.1177/154193120605000322