Integration and generalization of kappas for multiple raters

Bibliographic details
Published in: Psychological Bulletin, 1980-09, Vol. 88 (2), p. 322-328
Author: Conger, Anthony J.
Format: Article
Language: English
Abstract: J. A. Cohen's kappa (1960) for measuring agreement between 2 raters, using a nominal scale, has been extended for use with multiple raters by R. J. Light (1971) and J. L. Fleiss (1971). In the present article, these indices are analyzed and reformulated in terms of agreement statistics based on all pairs of raters. It has been argued that simultaneous agreement among all raters could provide an alternative basis for measuring multiple-rater agreement; however, agreement among raters can actually be considered to be an arbitrary choice along a continuum ranging from agreement for a pair of raters to agreement among all raters. Using this generalized concept of g-wise agreement, multiple-rater kappas are extended, interrelated, and illustrated. (4 ref)
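The pairwise formulation the abstract refers to can be illustrated with a short sketch: Cohen's (1960) two-rater kappa, averaged over all pairs of raters in the style of Light (1971). This is an illustrative implementation under that common definition, not a reproduction of Conger's exact reformulation; function names are hypothetical.

```python
from itertools import combinations
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's (1960) kappa for two raters over a nominal scale.

    r1, r2: equal-length sequences of category labels, one per subject.
    """
    n = len(r1)
    # Observed proportion of agreement.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal category proportions.
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (po - pe) / (1 - pe)

def pairwise_mean_kappa(ratings):
    """Average Cohen's kappa over all rater pairs (Light-style statistic).

    ratings: list of rating sequences, one per rater.
    """
    pairs = list(combinations(ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)
```

Pairwise (g = 2) agreement is one end of the continuum the article discusses; the g-wise generalization would instead score agreement within every subset of g raters.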
ISSN: 0033-2909; 1939-1455
DOI: 10.1037/0033-2909.88.2.322