Designing Reading Comprehension Assessments for Reading Interventions: How a Theoretically Motivated Assessment Can Serve as an Outcome Measure


Bibliographic Details
Published in: Educational psychology review, 2014-09, Vol. 26 (3), pp. 403-424
Authors: O'Reilly, Tenaha; Weeks, Jonathan; Sabatini, John; Halderman, Laura; Steinberg, Jonathan
Format: Article
Language: English
Online Access: Full text
Abstract: When designing a reading intervention, researchers and educators face a number of challenges related to the focus, intensity, and duration of the intervention. In this paper, we argue there is another fundamental challenge—the nature of the reading outcome measures used to evaluate the intervention. Many interventions fail to demonstrate significant improvements on standardized measures of reading comprehension. Although there are a number of reasons to explain this phenomenon, an important one to consider is misalignment between the nature of the outcome assessment and the targets of the intervention. In this study, we present data on three theoretically driven summative reading assessments that were developed in consultation with a research and evaluation team conducting an intervention study. The reading intervention, Reading Apprenticeship, involved instructing teachers to use disciplinary strategies in three domains: literature, history, and science. Factor analyses and other psychometric analyses on data from over 12,000 high school students revealed the assessments had adequate reliability, moderate correlations with state reading test scores and measures of background knowledge, a large general reading factor, and some preliminary evidence for separate, smaller factors specific to each form. In this paper, we describe the empirical work that motivated the assessments, the aims of the intervention, and the process used to develop the new assessments. Implications for intervention and assessment are discussed.
DOI: 10.1007/s10648-014-9269-z
Publisher: Springer (Boston)
CODEN: EPSREO
ERIC: EJ1040792; JSTOR: 43548433
ISSN: 1040-726X
eISSN: 1573-336X
Source: Jstor Complete Legacy; Education Source; SpringerLink Journals - AutoHoldings
Subjects:
Apprenticeships
Child and School Psychology
Correlation
Correlations
Education
Educational Psychology
Educational research
Factor Analysis
High School Students
History
Instructional design
Intervention
Language comprehension
Learning and Instruction
Literacy
Literature
Outcome Measures
Psychometrics
Reading ability
Reading Achievement
Reading Comprehension
Reading Instruction
Reading research
Reading Skills
Reading Tests
RESEARCH INTO PRACTICE
Sciences
Scores
Standardized Tests
Statistical variance
Summative Evaluation
Test Reliability