Targeted transfer learning to improve performance in small medical physics datasets
Published in: | Medical physics (Lancaster), 2020-12, Vol.47 (12), p.6246-6256 |
---|---|
Main authors: | Romero, Miguel; Interian, Yannet; Solberg, Timothy; Valdes, Gilmer |
Format: | Article |
Language: | English |
Subjects: | Deep Learning; Humans; machine learning; Neural Networks, Computer; Physics; small datasets; X-Rays |
Online access: | Full text |
description | Purpose
To perform an in‐depth evaluation of current state of the art techniques in training neural networks to identify appropriate approaches in small datasets.
Method
In total, 112,120 frontal‐view X‐ray images from the NIH ChestXray14 dataset were used in our analysis. Two tasks were studied: unbalanced multi‐label classification of 14 diseases, and binary classification of pneumonia vs non‐pneumonia. All datasets were randomly split into training, validation, and testing sets (70%, 10%, and 20%). Two popular convolutional neural networks (CNNs), DenseNet121 and ResNet50, were trained using PyTorch. We performed several experiments to test: (a) whether transfer learning from networks pretrained on ImageNet is of value to medical imaging/physics tasks (e.g., predicting toxicity from radiographic images after training on images from the internet), (b) whether pretraining on problems similar to the target task helps transfer learning (e.g., using X‐ray pretrained networks for X‐ray target tasks), (c) whether freezing deep layers or updating all weights provides the better transfer learning strategy, (d) the best strategy for the learning rate policy, and (e) what quantity of data is needed to deploy these various strategies appropriately (N = 50 to N = 77 880).
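The record contains no code; the following is a minimal PyTorch sketch of the setup the Method describes — a pretrained DenseNet121 with a new 14-label head and a random 70/10/20 split. The torchvision ImageNet weights and the generic `dataset` argument are stand-ins, not the paper's actual pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models
from torch.utils.data import random_split

# Illustrative only: torchvision's ImageNet weights stand in for the paper's
# pretraining sources, and `dataset` is any torch Dataset of chest X-rays.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 14)  # 14-disease head

# Unbalanced multi-label task: one sigmoid/BCE output per disease.
criterion = nn.BCEWithLogitsLoss()

def split_70_10_20(dataset, seed=0):
    """Random 70%/10%/20% train/validation/test split, as in the Method."""
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return random_split(
        dataset, [n_train, n_val, n - n_train - n_val],
        generator=torch.Generator().manual_seed(seed),
    )
```

`BCEWithLogitsLoss` treats the 14 disease labels as independent binary outputs, which matches the multi-label framing; its `pos_weight` argument is one conventional way to address the class imbalance the abstract mentions.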
Results
In the multi‐label problem, DenseNet121 needed at least 1600 patients to be comparable to, and 10 000 to outperform, radiomics‐based logistic regression. In classifying pneumonia vs non‐pneumonia, both CNN and radiomics‐based methods performed poorly when N < 2000. For small datasets (N < 2000), however, a significant boost in performance (>15% increase in AUC) comes from a good selection of the transfer learning dataset, dropout, a cyclical learning rate, and freezing and unfreezing of deep layers as training progresses. In contrast, if sufficient data are available (N > 35 000), little or no tweaking is needed to obtain impressive performance. While transfer learning using X‐ray images from other anatomical sites improves performance, we also observed a similar boost by using pretrained networks from ImageNet. Having source images from the same anatomical site, however, outperforms every other methodology, by up to 15%. In this case, DL models can be trained with as little as N = 50.
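Two of the small-N tweaks credited above — staged freezing/unfreezing of deep layers and a cyclical learning rate — might look like the following sketch; the optimizer, rates, and phase boundaries are illustrative assumptions, not the paper's settings:

```python
import torch
from torch import nn, optim
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 14)

# Phase 1: freeze the pretrained backbone and train only the new head,
# cycling the learning rate between base_lr and max_lr.
for p in model.parameters():
    p.requires_grad = False
for p in model.classifier.parameters():
    p.requires_grad = True
optimizer = optim.SGD((p for p in model.parameters() if p.requires_grad),
                      lr=1e-3, momentum=0.9)
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-5, max_lr=1e-3)

# Phase 2, as training progresses: unfreeze everything and fine-tune the
# whole network at a lower rate.
for p in model.parameters():
    p.requires_grad = True
optimizer = optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
```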
Conclusions
While training DL models in small datasets (N < 2000) is challenging, no tweaking is necessary for bigger datasets (N > 35 000). Using transfer learning with images from the same anatomical site can yield remarkable performance in new tasks with as few as N = 50. Surprisingly, we did not find any advantage in using images from other anatomical sites over networks trained on ImageNet. This indicates that the features learned may not be as general as currently believed, and that performance decays rapidly even when just the anatomical site of the images changes. |
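A sketch of the same-anatomical-site transfer the Conclusions recommend: start from weights already trained on chest X-rays, then fine-tune on the small target task. The checkpoint file name here is a hypothetical placeholder, not an artifact from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical same-site transfer: "chest_xray_pretrained.pt" is a placeholder
# for a checkpoint already trained on chest X-rays (e.g., on the 14-disease task).
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 14)
model.load_state_dict(torch.load("chest_xray_pretrained.pt", map_location="cpu"))

# Swap the head for the new binary target task (pneumonia vs non-pneumonia)
# and fine-tune on the small dataset (as few as N = 50, per the Conclusions).
model.classifier = nn.Linear(model.classifier.in_features, 1)
```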
doi | 10.1002/mp.14507 |
identifier | ISSN: 0094-2405; EISSN: 2473-4209; PMID: 33007112 |