The difference of model robustness assessment using cross‐validation and bootstrap methods
The validation principles on Quantitative Structure-Activity Relationship (QSAR) models issued by the Organisation for Economic Co-operation and Development (OECD) describe three criteria of model assessment: goodness of fit, robustness, and prediction. In the case of robustness, two internal-validation approaches are possible: bootstrap and cross-validation. We compared these validation metrics by checking their sample-size dependence, rank correlations to other metrics, and uncertainty, using modeling methods ranging from multivariate linear regression to artificial neural networks on 14 open-access datasets. The metrics show similar sample-size dependence and correlation to other validation parameters, and the individual uncertainty originating from the calculation recipes of the metrics is much smaller, for both approaches, than the part caused by the selection of the training set or the training/test split. The metrics of the two techniques are therefore interchangeable, but cross-validation parameters are easier to interpret because their range is similar to that of goodness-of-fit and prediction metrics, and at equal calculation load the variance from the random elements of the cross-validation calculation is slightly smaller than that of bootstrap.
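The two robustness metrics compared in the article can be illustrated with a minimal numpy sketch. This is not the authors' code; the concrete metric definitions here (a pooled k-fold cross-validated Q², and the mean out-of-bag R² over bootstrap resamples) are assumptions chosen as common textbook variants, fitted with ordinary least squares on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(y_true, y_pred):
    # coefficient of determination
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_predict(X_tr, y_tr, X_te):
    # ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ coef

def q2_cv(X, y, k=5):
    # k-fold cross-validation: every sample predicted once, Q^2 pooled
    idx = rng.permutation(len(y))
    pred = np.empty_like(y)
    for test_idx in np.array_split(idx, k):
        train_idx = np.setdiff1d(idx, test_idx)
        pred[test_idx] = fit_predict(X[train_idx], y[train_idx], X[test_idx])
    return r2(y, pred)

def q2_boot(X, y, n_boot=100):
    # bootstrap: train on a resample, score on the out-of-bag samples
    scores = []
    for _ in range(n_boot):
        train_idx = rng.integers(0, len(y), len(y))
        oob = np.setdiff1d(np.arange(len(y)), train_idx)
        scores.append(r2(y[oob], fit_predict(X[train_idx], y[train_idx], X[oob])))
    return float(np.mean(scores))

# toy linear data with mild noise
X = rng.normal(size=(80, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=80)
print(f"Q2 (5-fold CV)       = {q2_cv(X, y):.3f}")
print(f"Q2 (bootstrap, OOB)  = {q2_boot(X, y):.3f}")
```

On well-behaved data both metrics land close together, which is consistent with the article's finding that the two techniques carry nearly the same robustness information.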
Published in: | Journal of chemometrics 2024-06, Vol.38 (6), p.n/a |
---|---|
Main authors: | Lasfar, Rita; Tóth, Gergely |
Format: | Article |
Language: | eng |
Subject terms: | ANN; Artificial neural networks; Goodness of fit; MLR; model validation; Parameter uncertainty; PLS; Robustness; Statistical analysis; Statistical methods; SVR; XGBOOST |
Online access: | Full text |
container_end_page | n/a |
---|---|
container_issue | 6 |
container_start_page | |
container_title | Journal of chemometrics |
container_volume | 38 |
creator | Lasfar, Rita; Tóth, Gergely |
description | The validation principles on Quantitative Structure-Activity Relationship (QSAR) issued by the Organisation for Economic Co-operation and Development (OECD) describe three criteria of model assessment: goodness of fit, robustness and prediction. In the case of robustness, two internal-validation approaches are possible: bootstrap and cross-validation. We compared these validation metrics by checking their sample-size dependence, rank correlations to other metrics and uncertainty. We used modeling methods from multivariate linear regression to artificial neural networks on 14 open-access datasets. We found that the metrics provide similar sample-size dependence and correlation to other validation parameters. The individual uncertainty originating from the calculation recipes of the metrics is much smaller for both approaches than the part caused by the selection of the training set or the training/test split. We concluded that the metrics of the two techniques are interchangeable, but the interpretation of cross-validation parameters is easier owing to their range being similar to that of goodness-of-fit and prediction metrics. Furthermore, the variance originating from the random elements of the cross-validation calculation is slightly smaller than that of bootstrap, if equal calculation load is applied.
The two methods provide close to the same information on robustness, but we suggest using cross-validation, because: a) bootstrap values are outliers among the metrics for other validation tasks such as goodness of fit or predictivity; b) the uncertainty of the robustness calculation is smaller for cross-validation at equal calculation load. |
doi_str_mv | 10.1002/cem.3530 |
format | Article |
publisher | Chichester: Wiley Subscription Services, Inc |
rights | 2024 John Wiley &amp; Sons, Ltd. |
orcidid | https://orcid.org/0000-0002-5146-5700 |
fulltext | fulltext |
identifier | ISSN: 0886-9383 |
ispartof | Journal of chemometrics, 2024-06, Vol.38 (6), p.n/a |
issn | 0886-9383 1099-128X |
language | eng |
recordid | cdi_proquest_journals_3066183037 |
source | Wiley Online Library - AutoHoldings Journals |
subjects | ANN; Artificial neural networks; Goodness of fit; MLR; model validation; Parameter uncertainty; PLS; Robustness; Statistical analysis; Statistical methods; SVR; XGBOOST |
title | The difference of model robustness assessment using cross‐validation and bootstrap methods |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T15%3A22%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The%20difference%20of%20model%20robustness%20assessment%20using%20cross%E2%80%90validation%20and%20bootstrap%20methods&rft.jtitle=Journal%20of%20chemometrics&rft.au=Lasfar,%20Rita&rft.date=2024-06&rft.volume=38&rft.issue=6&rft.epage=n/a&rft.issn=0886-9383&rft.eissn=1099-128X&rft_id=info:doi/10.1002/cem.3530&rft_dat=%3Cproquest_cross%3E3066183037%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3066183037&rft_id=info:pmid/&rfr_iscdi=true |
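The article's conclusion (b), that cross-validation has smaller uncertainty than bootstrap at equal calculation load, can be probed with a toy experiment: repeat each metric with many random seeds while holding the number of model fits equal (5-fold CV vs. 5 bootstrap resamples) and compare the spread. This is an illustrative sketch, not the paper's protocol; the model (OLS), data, and metric variants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(scale=0.5, size=60)

def ols_predict(X_tr, y_tr, X_te):
    # ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ coef

def cv_q2(seed, k=5):
    # 5-fold cross-validation: 5 model fits per evaluation
    r = np.random.default_rng(seed)
    idx = r.permutation(len(y))
    pred = np.empty_like(y)
    for te in np.array_split(idx, k):
        tr = np.setdiff1d(idx, te)
        pred[te] = ols_predict(X[tr], y[tr], X[te])
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

def boot_q2(seed, n_resamples=5):
    # bootstrap with 5 resamples: also 5 model fits per evaluation
    r = np.random.default_rng(seed)
    scores = []
    for _ in range(n_resamples):
        tr = r.integers(0, len(y), len(y))
        oob = np.setdiff1d(np.arange(len(y)), tr)
        pred = ols_predict(X[tr], y[tr], X[oob])
        scores.append(1 - np.sum((y[oob] - pred) ** 2)
                        / np.sum((y[oob] - y[oob].mean()) ** 2))
    return float(np.mean(scores))

# repeat each metric over 30 seeds and compare the spread
cv_scores = [cv_q2(s) for s in range(30)]
boot_scores = [boot_q2(s) for s in range(30)]
print("sd over seeds, 5-fold CV :", np.std(cv_scores))
print("sd over seeds, bootstrap :", np.std(boot_scores))
```

The paper reports the cross-validation spread to be slightly smaller at equal load; on a given toy dataset the ordering may vary, which is why a proper comparison, as in the article, averages over many datasets and modeling methods.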