Microbench: automated metadata management for systems biology benchmarking and reproducibility in Python
Published in: | Bioinformatics (Oxford, England), 2022-10, Vol.38 (20), p.4823-4825 |
---|---|
Main authors: | Lubbock, Alexander L R; Lopez, Carlos F |
Format: | Article |
Language: | eng |
Subjects: | Applications Notes; Benchmarking; Metadata; Reproducibility of Results; Software; Systems Biology |
Online access: | Full text |
container_end_page | 4825 |
---|---|
container_issue | 20 |
container_start_page | 4823 |
container_title | Bioinformatics (Oxford, England) |
container_volume | 38 |
creator | Lubbock, Alexander L R; Lopez, Carlos F |
description | Computational systems biology analyses typically make use of multiple software packages and their dependencies, which are often run across heterogeneous compute environments. This can introduce differences in performance and reproducibility. Capturing metadata (e.g. package versions, GPU model) currently requires repetitious code and is difficult to store centrally for analysis. Even where virtual environments and containers are used, updates over time mean that versioning metadata should still be captured within analysis pipelines to guarantee reproducibility.
Microbench is a simple and extensible Python package to automate metadata capture to a file or Redis database. Captured metadata can include execution time, software package versions, environment variables, hardware information, Python version and more, with plugins. We present three case studies demonstrating Microbench usage to benchmark code execution and examine environment metadata for reproducibility purposes.
Install from the Python Package Index using pip install microbench. Source code is available from https://github.com/alubbock/microbench.
Supplementary data are available at Bioinformatics online. |
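The decorator-based capture pattern the abstract describes looks roughly like the following. This is a minimal sketch based on the project README (https://github.com/alubbock/microbench): the MB* mixin names and the outfile class attribute follow that documentation, and the output filename is an arbitrary choice; verify against the current docs before relying on them.

    # Minimal sketch of file-based metadata capture, following the project
    # README. Mixin names and the 'outfile' attribute are assumptions drawn
    # from that documentation; the filename is arbitrary.
    from microbench import MicroBench, MBFunctionCall, MBPythonVersion, MBHostInfo

    class MyBench(MicroBench, MBFunctionCall, MBPythonVersion, MBHostInfo):
        outfile = 'my-benchmarks.jsonl'  # one JSON record appended per call

    benchmark = MyBench()

    @benchmark
    def my_analysis(n):
        # Stand-in for a real analysis step; timing and environment
        # metadata are captured around this call.
        return sum(i * i for i in range(n))

    my_analysis(1_000_000)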
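For the Redis backend mentioned in the abstract, the README documents a MicroBenchRedis variant; the redis_connection and redis_key attributes below follow our reading of that documentation and should likewise be treated as assumptions. File-based output is one JSON object per line, so captured runs can be loaded with pandas for cross-run and cross-machine comparison.

    # Sketch of the Redis storage backend; the class and attribute names
    # follow the README and are assumptions to confirm against current docs.
    from microbench import MicroBenchRedis

    class RedisBench(MicroBenchRedis):
        redis_connection = {'host': 'localhost', 'port': 6379}
        redis_key = 'microbench:results'

    redis_bench = RedisBench()

    @redis_bench
    def pipeline_step():
        pass  # benchmarked code goes here

    # File output is JSON lines, so results load directly into a DataFrame:
    import pandas as pd
    results = pd.read_json('my-benchmarks.jsonl', lines=True)
    print(results.head())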
doi_str_mv | 10.1093/bioinformatics/btac580 |
format | Article |
publisher | Oxford University Press |
pmid | 36000837 |
eissn | 1367-4811 |
publication_date | 2022-10-14 |
contributor | Kelso, Janet |
orcid | 0000-0002-6950-8908; 0000-0003-3668-7468 |
fulltext | fulltext |
identifier | ISSN: 1367-4803 |
ispartof | Bioinformatics (Oxford, England), 2022-10, Vol.38 (20), p.4823-4825 |
issn | 1367-4803; 1367-4811 |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_9563693 |
source | MEDLINE; Oxford Journals Open Access Collection; EZB-FREE-00999 freely available EZB journals; PubMed Central; Alma/SFX Local Collection |
subjects | Applications Notes Benchmarking Metadata Reproducibility of Results Software Systems Biology |
title | Microbench: automated metadata management for systems biology benchmarking and reproducibility in Python |