Adversarial training for prostate cancer classification using magnetic resonance imaging

To use adversarial training to increase the generalizability and diagnostic accuracy of deep learning models for prostate cancer diagnosis. This multicenter study retrospectively included 396 prostate cancer patients who underwent magnetic resonance imaging (development set, 297 patients from Shangh...

Detailed description

Bibliographic details
Published in: Quantitative imaging in medicine and surgery 2022-06, Vol.12 (6), p.3276-3287
Authors: Hu, Lei; Zhou, Da-Wei; Guo, Xiang-Yu; Xu, Wen-Hao; Wei, Li-Ming; Zhao, Jun-Gong
Format: Article
Language: English
Subjects: Original
Online access: Full text
container_end_page 3287
container_issue 6
container_start_page 3276
container_title Quantitative imaging in medicine and surgery
container_volume 12
creator Hu, Lei ; Zhou, Da-Wei ; Guo, Xiang-Yu ; Xu, Wen-Hao ; Wei, Li-Ming ; Zhao, Jun-Gong
description To use adversarial training to increase the generalizability and diagnostic accuracy of deep learning models for prostate cancer diagnosis. This multicenter study retrospectively included 396 prostate cancer patients who underwent magnetic resonance imaging (development set, 297 patients from Shanghai Jiao Tong University Affiliated Sixth People's Hospital and Eighth People's Hospital; test set, 99 patients from Renmin Hospital of Wuhan University). Two binary classification deep learning models for clinically significant prostate cancer classification [PM1, pretraining Visual Geometry Group network (VGGNet)-16-based model 1; PM2, pretraining residual network (ResNet)-50-based model 2] and two multiclass classification deep learning models for prostate cancer grading (PM3, pretraining VGGNet-16-based model 3; PM4, pretraining ResNet-50-based model 4) were built using apparent diffusion coefficient and T2-weighted images. These models were then retrained with adversarial examples starting from the initial random model parameters (AM1, adversarial training VGGNet-16 model 1; AM2, adversarial training ResNet-50 model 2; AM3, adversarial training VGGNet-16 model 3; AM4, adversarial training ResNet-50 model 4, respectively). To verify whether adversarial training can improve the diagnostic models' effectiveness, we compared the diagnostic performance of the deep learning methods before and after adversarial training. Receiver operating characteristic curve analysis was performed to evaluate the clinically significant prostate cancer classification models. Differences in areas under the curve (AUCs) were compared using DeLong's test. The quadratic weighted kappa score was used to verify the prostate cancer (PCa) grading models. AM1 and AM2 had significantly higher AUCs than PM1 and PM2 in the internal validation dataset (0.84 vs. 0.89 and 0.83 vs. 0.87) and test dataset (0.73 vs. 0.86 and 0.72 vs. 0.82). AM3 and AM4 showed higher κ values than PM3 and PM4 in the internal validation dataset {0.266 [95% confidence interval (CI): 0.152-0.379] vs. 0.292 (95% CI: 0.178-0.405) and 0.254 (95% CI: 0.159-0.390) vs. 0.279 (95% CI: 0.163-0.396)} and test set [0.196 (95% CI: 0.029-0.362) vs. 0.268 (95% CI: 0.109-0.427) and 0.183 (95% CI: 0.015-0.351) vs. 0.228 (95% CI: 0.068-0.389)]. Using adversarial examples to train prostate cancer classification deep learning models can improve their generalizability and classification abilities.
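
The retraining workflow summarized in the description (ImageNet-pretrained VGGNet-16/ResNet-50 classifiers built on ADC and T2-weighted inputs, retrained on adversarial examples, then scored with AUC for the binary models and quadratic weighted kappa for the grading models) can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' code: the FGSM-style attack, epsilon, optimizer, loss weighting, and placeholder data loader are assumptions, since this record does not state the paper's exact attack or hyperparameters.

# Minimal sketch of adversarial retraining plus the evaluation metrics named above.
# Assumptions (not from the record): FGSM perturbations, Adam, eps=0.01, and a
# generic PyTorch DataLoader yielding (image, label) batches.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score, cohen_kappa_score

def fgsm_example(model, x, y, loss_fn, eps=0.01):
    # One signed-gradient step on the input produces the adversarial copy.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training(model, loader, epochs=10, lr=1e-4, eps=0.01, device="cpu"):
    # Retrain a classifier on a mix of clean and FGSM-perturbed batches.
    model.to(device).train()
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = fgsm_example(model, x, y, loss_fn, eps)
            opt.zero_grad()  # clear gradients left over from the attack step
            # Equal weighting of clean and adversarial losses is a common choice;
            # the paper's exact scheme is not given in this record.
            loss = 0.5 * (loss_fn(model(x), y) + loss_fn(model(x_adv), y))
            loss.backward()
            opt.step()
    return model

# Backbone analogous to PM1/AM1: ImageNet-pretrained VGG-16 with a 2-class head.
vgg = models.vgg16(weights="IMAGENET1K_V1")
vgg.classifier[6] = nn.Linear(4096, 2)

# Evaluation as described: AUC for the binary clinically significant PCa models,
# quadratic weighted kappa for the grading models (predictions from a held-out loader).
# auc = roc_auc_score(y_true, y_prob)
# kappa = cohen_kappa_score(y_grade_true, y_grade_pred, weights="quadratic")

For the multiclass grading models (PM3/PM4 and AM3/AM4) the final layer would instead have one output per grade class, and cohen_kappa_score with weights="quadratic" computes the κ statistic reported in the abstract.
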
doi_str_mv 10.21037/qims-21-1089
format Article
publisher AME Publishing Company (China)
pmid 35655831
pqid 2673358408
rights 2022 Quantitative Imaging in Medicine and Surgery. All rights reserved.
linktohtml https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9131330/
linktopdf https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9131330/pdf/
tpages 12
oa free_for_read
fulltext fulltext
identifier ISSN: 2223-4292
ispartof Quantitative imaging in medicine and surgery, 2022-06, Vol.12 (6), p.3276-3287
issn 2223-4292
2223-4306
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_9131330
source Elektronische Zeitschriftenbibliothek (Electronic Journals Library) - freely accessible e-journals; PubMed Central
subjects Original
title Adversarial training for prostate cancer classification using magnetic resonance imaging
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T07%3A12%3A13IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adversarial%20training%20for%20prostate%20cancer%20classification%20using%20magnetic%20resonance%20imaging&rft.jtitle=Quantitative%20imaging%20in%20medicine%20and%20surgery&rft.au=Hu,%20Lei&rft.date=2022-06&rft.volume=12&rft.issue=6&rft.spage=3276&rft.epage=3287&rft.pages=3276-3287&rft.issn=2223-4292&rft.eissn=2223-4306&rft_id=info:doi/10.21037/qims-21-1089&rft_dat=%3Cproquest_pubme%3E2673358408%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2673358408&rft_id=info:pmid/35655831&rfr_iscdi=true