Who sees the most? Differences in students’ and educational research experts’ first impressions of classroom instruction


Bibliographic details
Published in: Social Psychology of Education, 2020-07, Vol. 23 (3), pp. 673–699
Authors: Begrich, Lukas; Fauth, Benjamin; Kunter, Mareike
Format: Article
Language: English
Online access: Full text
Description
Abstract: In recent decades, the assessment of instructional quality has grown into a popular and well-funded arm of educational research. The present study contributes to this field by exploring first impressions of untrained raters as an innovative approach to assessment. We apply the thin-slice procedure to obtain ratings of instructional quality along the dimensions of cognitive activation, classroom management, and constructive support, based on only 30 s of classroom observation. Ratings were compared to longitudinal data from the students taught in the videos to investigate the connections between these brief glimpses of instructional quality and student learning. In addition, we included samples of raters with different backgrounds (university students, middle school students, and educational research experts) to understand differences in thin-slice ratings with respect to their predictive power for student learning. Results suggest that each group provides reliable ratings, as measured by a high degree of agreement between raters, as well as ratings that are predictive of students’ learning. Furthermore, we find that experts’ ratings of classroom management and middle school students’ ratings of constructive support each explain unique components of variance in student test scores. This incremental validity can be explained by the experts’ greater implicit knowledge and by the students’ attunement to specific cues, attributable to their emotional involvement.
ISSN: 1381-2890
eISSN: 1573-1928
DOI: 10.1007/s11218-020-09554-2