Implementing a decision-theoretic design in clinical trials: Why and how?
This paper addresses two main questions: first, why should Bayesian and other innovative, data‐dependent design models be put into practice and, secondly, given the past dearth of actual applications, how might one example of such a design be implemented in a genuine example trial?
Saved in:
Published in: | Statistics in medicine, 2007-11, Vol.26 (27), p.4939-4957 |
---|---|
Main Authors: | Palmer, Christopher R.; Shahumyan, Harutyun |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | 4957 |
---|---|
container_issue | 27 |
container_start_page | 4939 |
container_title | Statistics in medicine |
container_volume | 26 |
creator | Palmer, Christopher R.; Shahumyan, Harutyun |
description | This paper addresses two main questions: first, why should Bayesian and other innovative, data‐dependent design models be put into practice and, secondly, given the past dearth of actual applications, how might one example of such a design be implemented in a genuine example trial?
Clinical trials amalgamate theory, practice and ethics, but this last point has become relegated to the background, rather than often taking a more appropriate primary role. Trial practice has evolved but has its roots in R. A. Fisher's randomized agricultural field trials of the 1920s. Reasons for, and consequences of, this are discussed from an ethical standpoint, drawing on an under‐used dichotomy introduced by French authors Lellouch and Schwartz (Int. Statist. Rev. 1971; 39:27–36). Plenty of ethically motivated designs for trials, including Bayesian designs, have been proposed, but have found little application thus far. One reason for this is a lack of awareness of such alternative designs among trialists, while another is a lack of user‐friendly software to allow study simulations.
To encourage implementation, a new C++ program called ‘Daniel’ is introduced, offering much potential to assist the design of today's randomized controlled trials. Daniel evaluates a particular decision‐theoretic method suitable for coping with either two or three Bernoulli response treatments with input features allowing user‐specified choices of: patient horizon (number to be treated before and after the comparative stages of the trial); an arbitrary fixed trial truncation size (to allow ready comparison with traditional designs or to cope with practical constraints); anticipated success rates and a measure of their uncertainty (a matter ignored in standard power calculations); and clinically relevant, and irrelevant, differences in treatment effect sizes. Error probabilities and expected trial durations can be thoroughly explored via simulation, it being better by far to harm ‘computer patients’ instead of real ones.
Suppose the objective in a clinical trial is to select between two treatments using a maximum horizon of 500 patients, when the truly superior treatment is expected to yield a 40 per cent success rate, but is believed to really range between 20 and 60 per cent. Simulation studies show that to detect a clinically relevant, absolute difference of 10 per cent between treatments, the decision‐theoretic procedure would treat a mean 68 pairs of patients (SD 37) before correctly identifying the better treatment 96.7 per cent of the time, an error rate of 3.3 per cent. Having made a recommendation based on these patients, the remaining 364 individuals, on average, could either be given the indicated treatment, knowing its choice is optimal for the chosen horizon, or, alternatively, they could be entered into another, separate clinical trial. For comparison, a fixed sample size trial, with standard 5 per cent level of significance and 80 per cent power to detect a 10 per cent difference, requires treating over 700 patients in two groups, with the half allocated to the inferior treatment considerably outnumbering the 68 expected under the decision‐theoretic design, and the overall number simply too high for realistic application.
In brief, the keys to answering the above ‘why?’ and ‘how?’ questions are ethics and software, respectively. Wider implications, both pros and cons, of implementing the particular method described will be discussed, with the overall conclusion that, where appropriate, clinical trials are now ready to undergo modernization from the agricultural age to the information age. Copyright © 2007 John Wiley & Sons, Ltd. |
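To give a feel for the kind of ‘computer patients’ study simulation described above, the following is a minimal C++ sketch (C++ being the language the abstract names for Daniel) of a truncated two-armed Bernoulli trial treating patients in pairs, with a simple posterior-probability stopping rule under independent Beta(1,1) priors. The priors, threshold, parameter values and the stopping rule itself are illustrative assumptions only; this is not a reproduction of the decision-theoretic (backward-induction) procedure that Daniel evaluates.

```cpp
// Monte Carlo sketch of a truncated, pairwise two-armed Bernoulli trial.
// The stopping rule (stop once the posterior probability that arm A beats
// arm B is decisive) is a simplified, hypothetical stand-in for illustration,
// NOT the decision-theoretic rule implemented in 'Daniel'.
#include <cstdio>
#include <random>

// One draw from Beta(a, b) via two Gamma draws.
double rbeta(double a, double b, std::mt19937& rng) {
    std::gamma_distribution<double> ga(a, 1.0), gb(b, 1.0);
    double x = ga(rng), y = gb(rng);
    return x / (x + y);
}

// Estimate P(pA > pB | data) under independent Beta(1 + s, 1 + f) posteriors.
double prob_A_better(int sA, int fA, int sB, int fB, std::mt19937& rng) {
    const int draws = 1000;
    int wins = 0;
    for (int i = 0; i < draws; ++i)
        if (rbeta(1.0 + sA, 1.0 + fA, rng) > rbeta(1.0 + sB, 1.0 + fB, rng))
            ++wins;
    return static_cast<double>(wins) / draws;
}

int main() {
    // Illustrative inputs, loosely mirroring the worked example above.
    const double pA = 0.40, pB = 0.30;  // assumed true success rates (A better)
    const int max_pairs = 250;          // truncation: at most 500 patients
    const double cut = 0.99;            // stop when the posterior is decisive
    const int reps = 500;               // number of simulated trials

    std::mt19937 rng(2007);
    std::bernoulli_distribution armA(pA), armB(pB);

    long total_pairs = 0;
    int correct = 0;
    for (int r = 0; r < reps; ++r) {
        int sA = 0, fA = 0, sB = 0, fB = 0, pairs = 0;
        double p = 0.5;
        while (pairs < max_pairs) {
            armA(rng) ? ++sA : ++fA;    // treat one patient on each arm
            armB(rng) ? ++sB : ++fB;
            ++pairs;
            p = prob_A_better(sA, fA, sB, fB, rng);
            if (p > cut || p < 1.0 - cut) break;  // decisive either way
        }
        total_pairs += pairs;
        if (p > 0.5) ++correct;         // recommend A if it looks better
    }
    std::printf("mean pairs treated = %.1f, correct selections = %.1f%%\n",
                total_pairs / static_cast<double>(reps),
                100.0 * correct / reps);
}
```

Operating characteristics of the kind quoted in the abstract, such as the mean number of pairs treated and the proportion of runs that select the truly better arm, drop straight out of such a loop, which is what makes simulation-based design exploration cheap compared with exposing real patients.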
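The ‘over 700 patients’ quoted for the conventional fixed-sample comparator can be sanity-checked against the standard normal-approximation sample-size formula for two independent proportions, assuming (purely for illustration) that the 10 per cent absolute difference corresponds to success rates of 40 versus 30 per cent, with two-sided α = 0.05 and 80 per cent power:

$$
n_{\text{per group}}
= \frac{\left( z_{1-\alpha/2}\sqrt{2\,\bar p\,\bar q} + z_{1-\beta}\sqrt{p_1 q_1 + p_2 q_2} \right)^{2}}{(p_1 - p_2)^{2}}
= \frac{\left( 1.96\sqrt{2(0.35)(0.65)} + 0.84\sqrt{(0.4)(0.6) + (0.3)(0.7)} \right)^{2}}{(0.10)^{2}}
\approx 356,
$$

i.e. roughly 712 patients in total. The exact figure shifts a little with the formula variant used (for example, with a continuity correction), but any standard calculation lands above 700, consistent with the comparison made above.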
doi_str_mv | 10.1002/sim.2949 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0277-6715 |
ispartof | Statistics in medicine, 2007-11, Vol.26 (27), p.4939-4957 |
issn | 0277-6715; 1097-0258 |
language | eng |
recordid | cdi_proquest_miscellaneous_68405501 |
source | MEDLINE; Access via Wiley Online Library |
subjects | Bayes Theorem; Bayesian analysis; Clinical trials; Computer simulation; Critical Care - methods; data-dependent design; Decision Making - ethics; Decision Theory; Design; ethics; Humans; Intubation - methods; Modernization; Oxygen - pharmacology; pragmatic trial; Randomized Controlled Trials as Topic - ethics; Randomized Controlled Trials as Topic - methods; Software |
title | Implementing a decision-theoretic design in clinical trials: Why and how? |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-11T13%3A26%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Implementing%20a%20decision-theoretic%20design%20in%20clinical%20trials:%20Why%20and%20how?&rft.jtitle=Statistics%20in%20medicine&rft.au=Palmer,%20Christopher%20R.&rft.date=2007-11-30&rft.volume=26&rft.issue=27&rft.spage=4939&rft.epage=4957&rft.pages=4939-4957&rft.issn=0277-6715&rft.eissn=1097-0258&rft.coden=SMEDDA&rft_id=info:doi/10.1002/sim.2949&rft_dat=%3Cproquest_cross%3E68405501%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=223137927&rft_id=info:pmid/17582801&rfr_iscdi=true |