An Empirical Study of Computerized Adaptive Test Administration Conditions
This empirical study was designed to determine the impact of computerized adaptive test (CAT) administration formats on student performance. Students in medical technology programs took both a paper-and-pencil test and an individualized computerized adaptive test. Students were randomly assigned to adaptive test administration formats to ascertain the effect on student performance of altering: (a) the difficulty of the first item, (b) the targeted level of test difficulty, (c) the minimum test length, and (d) the opportunity to control the test. The computerized adaptive test data were analyzed with ANCOVA, with the paper-and-pencil test used as a covariate to equalize ability variance among cells (this design is sketched in code after the record details below). The only significant main effect was for the opportunity to control the test, and there were no significant interactions among test administration formats. The study provides evidence on adjusting traditional computerized adaptive testing toward more familiar testing modalities.
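The article does not reproduce its item-selection code, but the three algorithmic conditions it manipulates can be illustrated with a minimal, hypothetical Rasch (1PL) CAT sketch. All names and parameters here (`run_cat`, `first_item_offset`, `target_offset`, `min_items`, `se_stop`) are illustrative assumptions, not the authors' implementation; condition (d), examinee control over the test, is an interface feature and is not modeled.

```python
import math

def item_information(theta: float, b: float) -> float:
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def estimate_theta(responses, iters=20):
    """Newton-Raphson ML estimate of theta under the Rasch model.

    Perfect (all-right/all-wrong) response strings have no finite MLE;
    the fixed iteration cap keeps this sketch from diverging.
    """
    theta = 0.0
    for _ in range(iters):
        info = sum(item_information(theta, b) for b, _ in responses)
        score = sum(x - 1.0 / (1.0 + math.exp(-(theta - b)))
                    for b, x in responses)
        theta += score / max(info, 1e-6)
    return theta, 1.0 / math.sqrt(max(info, 1e-6))

def run_cat(item_bank, answer_fn, first_item_offset=0.0, target_offset=0.0,
            min_items=10, max_items=30, se_stop=0.3):
    """Administer a simple Rasch CAT from a list of item difficulties.

    first_item_offset : difficulty of the first item (condition (a)).
    target_offset     : items are targeted at theta + target_offset, so an
                        easier-than-ability test uses a negative offset
                        (condition (b)).
    min_items         : minimum test length before stopping (condition (c)).
    answer_fn         : callable taking an item difficulty, returning 0 or 1.
    """
    theta, responses, administered = 0.0, [], set()
    for n in range(max_items):
        target = first_item_offset if n == 0 else theta + target_offset
        # Pick the unused item closest in difficulty to the target; under the
        # Rasch model this maximizes information at the target ability.
        i = min((j for j in range(len(item_bank)) if j not in administered),
                key=lambda j: abs(item_bank[j] - target))
        administered.add(i)
        responses.append((item_bank[i], answer_fn(item_bank[i])))
        theta, se = estimate_theta(responses)
        if n + 1 >= min_items and se <= se_stop:
            break
    return theta, se
```

For example, `run_cat(bank, simulee, first_item_offset=-1.0, target_offset=-0.5, min_items=15)` would administer an easy-start, easier-than-ability test of at least 15 items, roughly corresponding to one cell of the study's design.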
Saved in:
Published in: | Journal of Educational Measurement, 1994-09, Vol. 31 (3), p. 251-263 |
---|---|
Main Authors: | Lunz, Mary E.; Bergstrom, Betty A. |
Format: | Article |
Language: | English |
Subjects: | Academic Achievement; Adaptive Testing; Algorithms; Analysis of Covariance; Blood banks; Computer Assisted Testing; Empirical Research; Individual Testing; Item Response Theory; Medical Technologists; Medical technology; Observational studies; Paper and Pencil Tests; Pass fail grading; Pencils; Point estimators; Psychometrics; Standard deviation; Statistical variance; Test Format |
Online Access: | Full Text |
DOI: | 10.1111/j.1745-3984.1994.tb00446.x |
ISSN: | 0022-0655 |
EISSN: | 1745-3984 |
Publisher: | Blackwell Publishing Ltd, Oxford, UK |
Source: | Periodicals Index Online; JSTOR Archive Collection A-Z Listing |
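The analysis design referenced in the abstract, a between-subjects ANCOVA on CAT ability estimates with the paper-and-pencil score as covariate, is straightforward to reproduce in outline. Below is a minimal sketch using statsmodels; the data are synthetic stand-ins and every column name and effect size is invented, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data for the four between-subjects factors.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pp_score":    rng.normal(0, 1, n),                    # covariate
    "first_item":  rng.choice(["easy", "matched"], n),     # condition (a)
    "target_diff": rng.choice(["easier", "matched"], n),   # condition (b)
    "min_length":  rng.choice(["short", "long"], n),       # condition (c)
    "control":     rng.choice(["fixed", "examinee"], n),   # condition (d)
})
# Fabricated outcome: CAT ability estimate correlated with the covariate.
df["cat_theta"] = df["pp_score"] + rng.normal(0, 0.5, n)

# ANCOVA: full factorial of the four administration factors, with the
# paper-and-pencil score entered as a covariate to adjust for ability.
model = smf.ols(
    "cat_theta ~ pp_score + C(first_item) * C(target_diff)"
    " * C(min_length) * C(control)",
    data=df,
).fit()

# Type II ANOVA table: main effects and interactions adjusted for the covariate.
print(sm.stats.anova_lm(model, typ=2))
```

With the real data, the study reports a significant main effect only for examinee control of the test (the `control` factor here) and no significant interactions.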