The Use of Expert Elicitation among Computational Modeling Studies in Health Research: A Systematic Review


Bibliographic Details

Published in: Medical Decision Making, 2022-07, Vol. 42 (5), p. 684-703
Authors: Cadham, Christopher J., Knoll, Marie, Sánchez-Romero, Luz María, Cummings, K. Michael, Douglas, Clifford E., Liber, Alex, Mendez, David, Meza, Rafael, Mistry, Ritesh, Sertkaya, Aylin, Travis, Nargiz, Levy, David T.
Format: Article
Language: English
Online Access: Full text
Abstract: Background Expert elicitation (EE) has been used across disciplines to estimate input parameters for computational modeling research when information is sparse or conflictual. Objectives We conducted a systematic review to compare EE methods used to generate model input parameters in health research. Data Sources PubMed and Web of Science. Study Eligibility Modeling studies that reported the use of EE as the source for model input probabilities were included if they were published in English before June 2021 and reported health outcomes. Data Abstraction and Synthesis Studies were classified as “formal” EE methods if they explicitly reported details of their elicitation process. Those that stated use of expert opinion but provided limited information were classified as “indeterminate” methods. In both groups, we abstracted citation details, study design, modeling methodology, a description of elicited parameters, and elicitation methods. Comparisons were made between elicitation methods. Study Appraisal Studies that conducted a formal EE were appraised on the reporting quality of the EE. Quality appraisal was not conducted for studies of indeterminate methods. Results The search identified 1520 articles, of which 152 were included. Of the included studies, 40 were classified as formal EE and 112 as indeterminate methods. Most studies were cost-effectiveness analyses (77.6%). Forty-seven indeterminate method studies provided no information on methods for generating estimates. Among formal EEs, the average reporting quality score was 9 out of 16. Limitations Elicitations on nonhealth topics and those reported in the gray literature were not included. Conclusions We found poor reporting of EE methods used in modeling studies, making it difficult to discern meaningful differences in approaches. Improved quality standards for EEs would improve the validity and replicability of computational models.
Highlights: We find extensive use of expert elicitation for the development of model input parameters, but most studies do not provide adequate details of their elicitation methods. Lack of reporting hinders greater discussion of the merits and challenges of using expert elicitation for model input parameter development. There is a need to establish expert elicitation best practices and reporting guidelines.
DOI: 10.1177/0272989X211053794
Publisher: SAGE Publications (Los Angeles, CA)
PMID: 34694168
ORCID: 0000-0001-6305-8259; 0000-0001-9531-2733
ISSN: 0272-989X
EISSN: 1552-681X
Source: Access via SAGE; MEDLINE
Subjects: Computer Simulation; Cost-Benefit Analysis; Humans; Probability; Research Design