Item response theory model highlighting rating scale of a rubric and rater-rubric interaction in objective structured clinical examination
Objective structured clinical examinations (OSCEs) are a widely used performance assessment for medical and dental students. A common limitation of OSCEs is that the evaluation results depend on the characteristics of raters and a scoring rubric. To overcome this limitation, item response theory (IRT) models such as the many-facet Rasch model have been proposed to estimate examinee abilities while taking into account the characteristics of raters and evaluation items in a rubric. However, conventional IRT models have two impractical assumptions: constant rater severity across all evaluation items in a rubric and an equal interval rating scale among evaluation items, which can decrease model fitting and ability measurement accuracy. To resolve this problem, we propose a new IRT model that introduces two parameters: (1) a rater-item interaction parameter representing the rater severity for each evaluation item and (2) an item-specific step-difficulty parameter representing the difference in rating scales among evaluation items. We demonstrate the effectiveness of the proposed model by applying it to actual data collected from a medical interview test conducted at Tokyo Medical and Dental University as part of a post-clinical clerkship OSCE. The experimental results showed that the proposed model was well-fitted to our OSCE data and measured ability accurately. Furthermore, it provided abundant information on rater and item characteristics that conventional models cannot, helping us to better understand rater and item properties.
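The abstract describes the proposed model only verbally. As a rough illustration (not the authors' exact specification), the category-response probability of a generalized many-facet Rasch model with a rater-item interaction term and item-specific step difficulties can be sketched as below; all function and parameter names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def category_probs(theta, beta_item, beta_rater, beta_ir, steps):
    """Rating-category probabilities for one examinee-item-rater triple.

    theta      : examinee ability
    beta_item  : difficulty of the evaluation item
    beta_rater : overall severity of the rater
    beta_ir    : rater-item interaction (item-specific severity shift)
    steps      : item-specific step-difficulty parameters (length K-1
                 for a K-category rating scale)
    """
    # Per-step term; the rater-item interaction beta_ir lets severity
    # vary by item, and `steps` lets the rating scale vary by item.
    terms = theta - beta_item - beta_rater - beta_ir - np.asarray(steps, dtype=float)
    # Cumulative sum over steps; category 0 contributes 0 by convention.
    cum = np.concatenate(([0.0], np.cumsum(terms)))
    # Softmax over the K categories (shifted for numerical stability).
    e = np.exp(cum - cum.max())
    return e / e.sum()
```

For example, `category_probs(0.5, 0.0, 0.2, -0.1, [-1.0, 0.0, 1.0])` returns a length-4 probability vector over a 4-category scale; raising `theta` shifts probability mass toward the higher categories.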
Saved in:
Published in: | PloS one 2024-09, Vol.19 (9), p.e0309887 |
---|---|
Main authors: | Uto, Masaki; Tsuruta, Jun; Araki, Kouji; Ueno, Maomi |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Clinical Competence; Educational aspects; Educational Measurement - methods; Educational research; Evaluation; Humans; Interaction parameters; Item response theory; Methods; Models, Theoretical; Performance assessment; Performance evaluation; Periodic health examinations; Physical diagnosis; Rating scales; Rubrics (Education); Students; Students, Dental; Students, Medical |
Online access: | Full text |
creator | Uto, Masaki; Tsuruta, Jun; Araki, Kouji; Ueno, Maomi |
description | Objective structured clinical examinations (OSCEs) are a widely used performance assessment for medical and dental students. A common limitation of OSCEs is that the evaluation results depend on the characteristics of raters and a scoring rubric. To overcome this limitation, item response theory (IRT) models such as the many-facet Rasch model have been proposed to estimate examinee abilities while taking into account the characteristics of raters and evaluation items in a rubric. However, conventional IRT models have two impractical assumptions: constant rater severity across all evaluation items in a rubric and an equal interval rating scale among evaluation items, which can decrease model fitting and ability measurement accuracy. To resolve this problem, we propose a new IRT model that introduces two parameters: (1) a rater-item interaction parameter representing the rater severity for each evaluation item and (2) an item-specific step-difficulty parameter representing the difference in rating scales among evaluation items. We demonstrate the effectiveness of the proposed model by applying it to actual data collected from a medical interview test conducted at Tokyo Medical and Dental University as part of a post-clinical clerkship OSCE. The experimental results showed that the proposed model was well-fitted to our OSCE data and measured ability accurately. Furthermore, it provided abundant information on rater and item characteristics that conventional models cannot, helping us to better understand rater and item properties. |
doi_str_mv | 10.1371/journal.pone.0309887 |
format | Article |
contributor | Cilar Budler, Leona |
publication_date | 2024-09-06 |
publisher | Public Library of Science (United States) |
pmid | 39240906 |
orcidid | https://orcid.org/0000-0002-9330-5158 |
rights | © 2024 Uto et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
identifier | ISSN: 1932-6203 |
ispartof | PloS one, 2024-09, Vol.19 (9), p.e0309887 |
issn | 1932-6203 |
language | eng |
source | MEDLINE; DOAJ Directory of Open Access Journals; Public Library of Science (PLoS) Journals Open Access; EZB-FREE-00999 freely available EZB journals; PubMed Central; Free Full-Text Journals in Chemistry |
subjects | Accuracy; Clinical Competence; Educational aspects; Educational Measurement - methods; Educational research; Evaluation; Humans; Interaction parameters; Item response theory; Methods; Models, Theoretical; Performance assessment; Performance evaluation; Periodic health examinations; Physical diagnosis; Rating scales; Rubrics (Education); Students; Students, Dental; Students, Medical |
title | Item response theory model highlighting rating scale of a rubric and rater-rubric interaction in objective structured clinical examination |