Assessing Item Fit Using Expected Score Curve Under Restricted Recalibration

In item response theory applications, item fit analysis is often performed for precalibrated items using response data from subsequent test administrations. Because such practices lead to the involvement of sampling variability from two distinct samples that must be properly addressed for statistical inferences, conventional item fit analysis can be revisited and modified. This study extends the item fit analysis originally proposed by Haberman et al., which involves examining the discrepancy between the model-implied and empirical expected score curve. We analytically derive the standard errors that accurately account for the sampling variability from two samples within the framework of restricted recalibration. After derivation, we present the findings from a simulation study that evaluates the performance of our proposed method in terms of the empirical Type I error rate and power, for both dichotomous and polytomous items. An empirical example is also provided, in which we assess the item fit of a pediatric short-form scale in the Patient-Reported Outcome Measurement Information System.
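The core idea the abstract describes, comparing a model-implied expected score curve against an empirical one, can be sketched as follows. This is a minimal illustration only, not the authors' method: the 2PL item parameters, the theta grid, and the binning scheme are all hypothetical, and the sketch omits the restricted-recalibration standard errors that are the paper's contribution.

```python
import numpy as np

# Hypothetical sketch: compare the model-implied expected score curve of a
# single 2PL item with the empirical curve estimated from simulated data.
# Item parameters a, b and the binning below are illustrative assumptions.

rng = np.random.default_rng(0)

def irf_2pl(theta, a=1.2, b=0.3):
    """Model-implied expected score (probability of a correct response)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Simulate responses from the same 2PL model, so the item should fit well.
n = 20000
theta = rng.standard_normal(n)
x = rng.binomial(1, irf_2pl(theta))

# Empirical expected score curve: average observed scores within theta bins.
edges = np.linspace(-3, 3, 13)
centers = 0.5 * (edges[:-1] + edges[1:])
idx = np.digitize(theta, edges) - 1
empirical = np.array([x[idx == k].mean() for k in range(len(centers))])

# Discrepancy between the two curves; small values suggest adequate item fit.
model_implied = irf_2pl(centers)
max_abs_diff = np.max(np.abs(empirical - model_implied))
print(f"max |empirical - model| = {max_abs_diff:.3f}")
```

Because the data are generated from the fitted model, the two curves should nearly coincide; a formal test, as in the paper, would additionally require standard errors that account for sampling variability in both the calibration and validation samples.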

Detailed Description

Saved in:
Bibliographic Details
Published in: Journal of educational and behavioral statistics, 2024-09
Main authors: Han, Youngjin; Yang, Ji Seung; Liu, Yang
Format: Article
Language: English
Online access: Full text
DOI: 10.3102/10769986241268604
ISSN: 1076-9986
EISSN: 1935-1054
Publication date: 2024-09-04
ORCID: https://orcid.org/0000-0002-0780-6272
Source: Access via SAGE; Alma/SFX Local Collection