Can medical record reviewers reliably identify errors and adverse events in the ED?

Abstract

Background: Chart review has been the mainstay of medical quality assurance practices since its introduction more than a century ago. The validity of chart review, however, has been vitiated by a lack of methodological rigor.

Objectives: By measuring the degree of interrater agreement among a 13-member review board of emergency physicians, we sought to validate the reliability of a chart review–based quality assurance process using computerized screening based on explicit case parameters.

Methods: All patients presenting to an urban, tertiary care academic medical center emergency department (annual volume of 57,000 patients) between November 2012 and November 2013 were screened electronically. Cases were programmatically flagged for review according to explicit criteria: return within 72 hours, procedural evaluation, floor-to-ICU transfer within 24 hours of admission, death within 24 hours of admission, physician complaints, and patient complaints. Each case was reviewed independently by a 13-member emergency department quality assurance committee, all of whom were board certified in emergency medicine and trained in the use of the tool. None of the reviewers were involved in the care of the specific patients they reviewed. Reviewers used a previously validated 8-point Likert scale to rate (1) coordination of patient care, (2) presence and severity of adverse events, (3) degree of medical error, and (4) quality of medical judgment. Agreement among reviewers was assessed with the intraclass correlation coefficient (ICC) for each parameter.

Results: Agreement and the degree of significance for each parameter were as follows: coordination of patient care (ICC = 0.67; P < .001), presence and severity of adverse events (ICC = 0.52; P = .001), degree of medical error (ICC = 0.72; P < .001), and quality of medical judgment (ICC = 0.67; P < .001).

Conclusion: Agreement in the chart review process can be achieved among physician reviewers. The degree of agreement attainable is comparable or superior to that of similar studies reported to date. These results highlight the potential for the use of computerized screening, explicit criteria, and training of expert reviewers to improve the reliability and validity of chart review–based quality assurance.
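The agreement statistic reported here, the intraclass correlation coefficient (ICC), can be illustrated with a short sketch. The abstract does not state which ICC form the study used; the example below assumes ICC(2,1) (two-way random effects, absolute agreement, single rater) and uses made-up 8-point Likert ratings — the function name and data are hypothetical, not from the paper.

```python
def icc2_1(ratings):
    """ICC(2,1) for a complete ratings matrix.

    ratings: list of rows, one per rated case, each row holding one
    score per rater (every rater scores every case).
    """
    n = len(ratings)       # number of cases (targets)
    k = len(ratings[0])    # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA decomposition: cases, raters, residual
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-cases mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

    # Shrout–Fleiss ICC(2,1): absolute agreement, single rater
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 8-point Likert ratings: 4 cases rated by 3 reviewers
scores = [[7, 7, 6], [2, 3, 2], [5, 4, 5], [8, 7, 8]]
print(round(icc2_1(scores), 2))  # → 0.94
```

With closely clustered ratings the ICC approaches 1; the study's observed values (0.52–0.72) indicate moderate-to-good agreement on this scale.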

Detailed description

Saved in:
Bibliographic details
Published in: The American journal of emergency medicine, 2016-06, Vol. 34 (6), p. 1043-1048
Authors: Klasco, Richard S., MD; Wolfe, Richard E., MD; Lee, Terrance, MD; Anderson, Philip, MD; Jacobson, Lee S., MD, PhD; Solano, Joshua, MD; Edlow, Jonathan, MD; Grossman, Shamai A., MD
Format: Article
Language: English
Subjects:
Online access: Full text
DOI: 10.1016/j.ajem.2016.03.001
PMID: 27055604
Publisher: Elsevier Inc
ISSN: 0735-6757
EISSN: 1532-8171
Subjects: Agreements; Automation; Bias; Cohort Studies; Confidence intervals; Correlation coefficient; Emergency; Emergency medical care; Emergency medical services; Emergency Service, Hospital; Humans; Medical Errors; Medical Records; Methods; Observer Variation; Quality assurance; Quality Assurance, Health Care; Reproducibility of Results; Studies