Ranking Practice Variability in the Medical Student Performance Evaluation: So Bad, It’s “Good”

PURPOSE: To examine the variability among medical schools in ranking systems used in medical student performance evaluations (MSPEs). METHOD: The authors reviewed MSPEs from U.S. MD-granting medical schools received by the University of California, Irvine emergency medicine and internal medicine residency programs during 2012–2013 and 2014–2015. They recorded whether the school used a ranking system, the type of ranking system used, the size and description of student categories, the location of the ranking statement and category legend, and whether nonranking schools used language suggestive of rank. RESULTS: Of the 134 medical schools in the study sample, the majority (n = 101; 75%) provided ranks for students in the MSPE. Most of the ranking schools (n = 63; 62%) placed students into named category groups, but the number and size of groups varied. The most common descriptors used for these 63 schools’ top, second, third, and lowest groups were “outstanding,” “excellent,” “very good,” and “good,” respectively, but each of these terms was used across a broad range of percentile ranks. Student ranks and school category legends were found in various locations. Many of the 33 schools that did not rank students included language suggestive of rank. CONCLUSIONS: There is extensive variation in ranking systems used in MSPEs. Program directors may find it difficult to use MSPEs to compare applicants, which may diminish the MSPE’s value in the residency application process and negatively affect high-achieving students. A consistent approach to ranking students would benefit program directors, students, and student affairs officers.


Bibliographic Details
Published in: Academic Medicine, 2016-11, Vol. 91 (11), p. 1540-1545
Main authors: Boysen Osborn, Megan; Mattson, James; Yanuck, Justin; Anderson, Craig; Tekian, Ara; Fox, John Christian; Harris, Ilene B.
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1097/ACM.0000000000001180
ISSN: 1040-2446
EISSN: 1938-808X
PMID: 27075499
Source: MEDLINE; Journals@Ovid LWW Legacy Archive; Alma/SFX Local Collection
Subjects: Achievement; Education, Medical, Undergraduate; Educational Measurement - methods; Educational Measurement - statistics &amp; numerical data; Humans; Internship and Residency; School Admission Criteria; Schools, Medical - statistics &amp; numerical data; Students, Medical; United States