A Study on Detecting of Differential Item Functioning of PISA 2006 Science Literacy Items in Turkish and American Samples
Published in: Eurasian Journal of Educational Research, 2015-01, Vol. 15 (58), p. 41
Main authors:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract:

Problem Statement: Item bias occurs when individuals from different groups (different gender, cultural background, etc.) have different probabilities of responding correctly to a test item despite having the same skill levels. It is important that tests and items are free of bias in order to ensure the accuracy of decisions taken on the basis of test scores. Thus, items should be tested for bias during the processes of test development and adaptation. Items used in testing programs such as the Program for International Student Assessment (PISA), whose results inform educational policies in the participating countries, should be reviewed for bias. The study examines whether items of the 2006 PISA science literacy test administered in Turkey show bias.

Purpose of the Study: The aim of this study is to analyze the measurement equivalence of the PISA 2006 science literacy test in the Turkish and American groups in terms of structural invariance, and to determine whether the science literacy items show inter-cultural bias.

Methods: The study included data from 757 Turkish and 856 American 15-year-old students. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were performed to determine whether the PISA science literacy test measured an equivalent construct in both groups; multi-group confirmatory factor analysis (MCFA) was used to identify differences in the factor structure across cultures. Item bias was detected via the Mantel-Haenszel (MH), Simultaneous Item Bias Test (SIBTEST), and Item Response Theory Likelihood-Ratio (IRT-LR) procedures.

Findings and Results: According to the MCFA results, the PISA 2006 science literacy test showed an equivalent measurement construct for both the Turkish and American groups. Moreover, the three analysis methods agreed, at the B and C levels, that 15 items in the Turkish sample and 25 items in the American sample showed DIF. According to expert opinions, common sources of item bias were familiarity with item content and differing skill levels between cultures.

Conclusions and Recommendations: The 38 items that showed DIF by each of the three methods were accepted as having DIF. According to the findings of the present study, the possible sources of bias in the items will not change the average level of student performance in the participating countries. However, reviewing item content before test administration would be beneficial in order to reduce the errors introduced by items with DIF across different cultures.
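Illustration of the Mantel-Haenszel procedure: as a minimal sketch of the DIF detection step named in the Methods section (not the authors' implementation), the Python code below builds one 2x2 group-by-response table per total-score level, pools the strata into the MH common odds ratio, converts it to the ETS delta metric (MH D-DIF = -2.35 ln alpha), and assigns an A/B/C magnitude class. The function name and the simulated data are assumptions, and the chi-square significance test and item purification used in operational DIF analyses are omitted.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group, focal_label):
    """Simplified Mantel-Haenszel DIF check (sketch, not the authors' code).

    item        : 0/1 scores on the studied item
    total       : matching criterion, e.g. total test score
    group       : group label per student; focal_label marks the focal group
    Returns the common odds ratio, the ETS delta (MH D-DIF), and an A/B/C
    magnitude class. Significance testing and purification are omitted.
    """
    item, total = np.asarray(item), np.asarray(total)
    focal = np.asarray(group) == focal_label

    num = den = 0.0
    for k in np.unique(total):                 # one 2x2 table per score level
        at_k = total == k
        A = np.sum(item[at_k & ~focal] == 1)   # reference group, correct
        B = np.sum(item[at_k & ~focal] == 0)   # reference group, incorrect
        C = np.sum(item[at_k & focal] == 1)    # focal group, correct
        D = np.sum(item[at_k & focal] == 0)    # focal group, incorrect
        N = A + B + C + D
        if N == 0:
            continue
        num += A * D / N
        den += B * C / N

    alpha = num / den                          # MH common odds ratio
    delta = -2.35 * np.log(alpha)              # ETS delta metric (MH D-DIF)
    size = abs(delta)
    label = "A" if size < 1.0 else ("B" if size < 1.5 else "C")
    return alpha, delta, label

# Hypothetical usage with simulated data (names and values are illustrative).
rng = np.random.default_rng(0)
total = rng.integers(0, 26, size=400)
item = (rng.random(400) < 0.6).astype(int)
group = np.where(rng.random(400) < 0.5, "TUR", "USA")
print(mantel_haenszel_dif(item, total, group, focal_label="TUR"))
```

SIBTEST and IRT-LR, the other two procedures used in the study, rest on different machinery (weighted group differences in expected item scores and nested IRT model comparisons, respectively) and are not sketched here.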
ISSN: 1302-597X
DOI: 10.14689/ejer.2015.58.3