Validating the standardized-patient assessment administered to medical students in the New York City Consortium
Saved in:
Published in: | Academic Medicine 1997-07, Vol.72 (7), p.619-26 |
---|---|
Main authors: | Swartz, M H; Colliver, J A; Bardes, C L; Charon, R; Fried, E D; Moroff, S |
Format: | Article |
Language: | eng |
Subjects: | Clinical Clerkship - standards; Clinical Competence - standards; Educational Measurement; Evaluation Studies as Topic; New York City; Physical Examination - standards; Physician-Patient Relations; Reproducibility of Results |
Online access: | Full text |
container_end_page | 26 |
---|---|
container_issue | 7 |
container_start_page | 619 |
container_title | Academic Medicine |
container_volume | 72 |
creator | Swartz, M H; Colliver, J A; Bardes, C L; Charon, R; Fried, E D; Moroff, S |
description | To test the criterion validity of existing standardized-patient (SP)-examination scores using global ratings by a panel of faculty-physician observers as the gold-standard criterion; to determine whether such ratings can provide a reliable gold-standard criterion to be used for validity-related research; and to encourage the use of these gold-standard ratings for validation research and examination development, including scoring and standard setting, and for enhancing understanding of the clinical competence construct.
Five faculty physicians independently observed and rated videotaped performances of 44 students from one medical school on the seven SP cases that make up the fourth-year assessment administered at The Morchand Center of Mount Sinai School of Medicine to students in the eight member schools in the New York City Consortium.
The validity coefficients showed correlations between scores on the examination and the overall ratings ranging from .60 to .70. The reliability coefficients for ratings of overall examination performance reached the commonly recommended .80 level and were very close at the case level, with interrater reliabilities generally in the .70 to .80 range.
The results are encouraging, with validity coefficients high enough to warrant optimism about the possibility of increasing them to the recommended .80 level, based on further studies to identify those measurable performance characteristics that most reflect the gold-standard ratings. The high interrater reliabilities indicate that faculty-physician ratings of performance on SP cases and examinations may be able to provide a reliable gold standard for validating and refining SP assessment. |
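The coefficients reported in this abstract are standard psychometric quantities: a Pearson correlation between examination scores and panel ratings (validity), and an internal-consistency coefficient across raters (reliability). A minimal sketch of how such values could be computed follows; this is purely illustrative — the study's actual analysis procedure is not specified in the record, and all data and function names here are hypothetical.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two paired score lists,
    e.g., SP-examination scores vs. faculty global ratings."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def cronbach_alpha(ratings):
    """Cronbach's alpha across raters.
    ratings[r][s] = rater r's rating of student s."""
    k = len(ratings)
    rater_vars = sum(statistics.variance(r) for r in ratings)
    totals = [sum(col) for col in zip(*ratings)]  # per-student rating sums
    return k / (k - 1) * (1 - rater_vars / statistics.variance(totals))
```

With hypothetical data, `pearson_r(exam_scores, panel_ratings)` would give a validity coefficient comparable to the .60–.70 range reported, and `cronbach_alpha` applied to the five raters' overall ratings would give a reliability coefficient to compare against the .80 benchmark.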
doi_str_mv | 10.1097/00001888-199707000-00014 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1040-2446 |
ispartof | Academic Medicine, 1997-07, Vol.72 (7), p.619-26 |
issn | 1040-2446 |
language | eng |
recordid | cdi_proquest_miscellaneous_79156504 |
source | MEDLINE; Journals@Ovid LWW Legacy Archive; Alma/SFX Local Collection |
subjects | Clinical Clerkship - standards; Clinical Competence - standards; Educational Measurement; Evaluation Studies as Topic; New York City; Physical Examination - standards; Physician-Patient Relations; Reproducibility of Results |
title | Validating the standardized-patient assessment administered to medical students in the New York City Consortium |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T14%3A00%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Validating%20the%20standardized-patient%20assessment%20administered%20to%20medical%20students%20in%20the%20New%20York%20City%20Consortium&rft.jtitle=Academic%20Medicine&rft.au=Swartz,%20M%20H&rft.date=1997-07-01&rft.volume=72&rft.issue=7&rft.spage=619&rft.epage=26&rft.pages=619-26&rft.issn=1040-2446&rft_id=info:doi/10.1097/00001888-199707000-00014&rft_dat=%3Cproquest_cross%3E79156504%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=79156504&rft_id=info:pmid/9236472&rfr_iscdi=true |