Reproducible biomedical benchmarking in the cloud: lessons from crowd-sourced data challenges

Bibliographic Details
Published in: Genome Biology, 2019-09, Vol. 20(1), Article 195
Authors: Ellrott, Kyle; Buchanan, Alex; Creason, Allison; Mason, Michael; Schaffter, Thomas; Hoff, Bruce; Eddy, James; Chilton, John M.; Yu, Thomas; Stuart, Joshua M.; Saez-Rodriguez, Julio; Stolovitzky, Gustavo; Boutros, Paul C.; Guinney, Justin
Format: Article
Language: English
Abstract: Challenges are achieving broad acceptance for addressing many biomedical questions and enabling tool assessment. But ensuring that the methods evaluated are reproducible and reusable is complicated by the diversity of software architectures, input and output file formats, and computing environments. To mitigate these problems, some challenges have leveraged new virtualization and compute methods, requiring participants to submit cloud-ready software packages. We review recent data challenges with innovative approaches to model reproducibility and data sharing, and outline key lessons for improving quantitative biomedical data analysis through crowd-sourced benchmarking challenges.
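
As a rough illustration of the kind of "cloud-ready software package" such challenges ask participants to submit (a minimal sketch only; the image tag, registry, entry-point arguments, and file paths below are hypothetical and not taken from the article or any specific challenge), a participant might containerize a method and publish it using the Docker SDK for Python:

# Minimal sketch: build, locally test, and push a containerized method so
# challenge organizers can run it unmodified in their own cloud environment.
# Assumes Docker is installed and a Dockerfile exists in the current directory;
# registry, tag, command-line flags, and mounted paths are placeholders.
import docker

client = docker.from_env()

# Build the image from the local Dockerfile.
image, build_logs = client.images.build(path=".", tag="example-registry/my-method:v1")

# Local smoke test: run the container roughly the way an evaluator would,
# with input/output exchanged through a mounted data directory.
output = client.containers.run(
    "example-registry/my-method:v1",
    command="--input /data/input.csv --output /data/predictions.csv",
    volumes={"/tmp/challenge-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
print(output.decode())

# Push to a registry the challenge infrastructure can pull from.
client.images.push("example-registry/my-method", tag="v1")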
ISSN: 1474-7596; 1474-760X
DOI: 10.1186/s13059-019-1794-0