Formative vs. Reflective Measurement: Comment on Marakas, Johnson, and Clay (2007) / Formative vs. Reflective Measurement: A Reply to Hardin, Chang, and Fuller
| Published in: | Journal of the Association for Information Systems, 2008-09, Vol. 9 (9), p. 519 |
|---|---|
| Main authors: | Hardin, Chang, Fuller, Marakas, Johnson, Clay |
| Format: | Article |
| Language: | English |
| Online access: | Full text |
| Abstract: | In a recent issue of the Journal of the Association for Information Systems, Marakas, Johnson, and Clay (2007) presented an interesting and important discussion on formative versus reflective measurement, specifically related to the measurement of the computer self-efficacy (CSE) construct. However, we believe their recommendation to measure CSE constructs using formative indicators merits additional dialogue before being adopted by researchers. In the current study we discuss why the substantive theory underlying the CSE construct suggests that it is best measured using reflective indicators. We then provide empirical evidence demonstrating how the misspecification of existing CSE measures as formative can result in unstable estimates across varying endogenous variables and research contexts. Specifically, we demonstrate how formative indicator weights are dependent on the endogenous variable used to estimate them. Given that the strength of formative indicator weights is one metric used for determining indicator retention, and adding or dropping formative indicators can result in changes in the conceptual meaning of a construct, the use of formative measurement can result in the retention of different indicators and ultimately the measurement of different concepts across studies. As a result, the comparison of findings across studies over time becomes conceptually problematic and compromises our ability to replicate and extend research in a particular domain. We discuss not only the consequences of using formative versus reflective measures in CSE research but also the broader implications this choice has on research in other domains. |
| ISSN: | 1536-9323 |
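The abstract's central empirical point, that formative indicator weights depend on the endogenous variable used to estimate them, can be illustrated with a small simulation. The sketch below is not the authors' analysis: it approximates formative weights with standardized multiple-regression coefficients (a common simplification of PLS Mode B outer weights), and the indicator and outcome names (`y_performance`, `y_anxiety`) are hypothetical.

```python
# Illustrative simulation only: approximate "formative" indicator weights by
# regressing each outcome on the indicators, then compare weight vectors.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Four CSE-style indicators with modest intercorrelations.
indicators = rng.multivariate_normal(
    mean=np.zeros(4),
    cov=0.3 * np.ones((4, 4)) + 0.7 * np.eye(4),
    size=n,
)

# Two different (hypothetical) endogenous variables that draw on the
# indicators in different ways.
y_performance = indicators @ np.array([0.6, 0.3, 0.1, 0.0]) + rng.normal(0, 1, n)
y_anxiety = indicators @ np.array([0.0, 0.1, 0.4, 0.6]) + rng.normal(0, 1, n)

def formative_weights(X, y):
    """Standardized OLS coefficients of the indicators predicting the outcome."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    w, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return w

print("Weights vs. performance:", np.round(formative_weights(indicators, y_performance), 2))
print("Weights vs. anxiety:    ", np.round(formative_weights(indicators, y_anxiety), 2))
# The same indicators receive very different weights depending on the endogenous
# variable, so weight-based retention rules could keep different indicators
# across studies, which is the instability the abstract describes.
```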