Applications of Sequential Learning for Medical Image Classification

Purpose: The aim of this work is to develop a neural network training framework for continual training on small amounts of medical imaging data, and to create heuristics for assessing training in the absence of a hold-out validation or test set. Materials and Methods: We formulated a retrospective sequential learning approach that trains and consistently updates a model on mini-batches of medical images over time. We address problems that impede sequential learning, such as overfitting, catastrophic forgetting, and concept drift, using PyTorch convolutional neural networks (CNNs) and the publicly available Medical MNIST and NIH Chest X-Ray imaging datasets. We begin by comparing two methods for a sequentially trained CNN, with and without base pre-training. We then transition to two methods of unique training and validation data recruitment to estimate full information extraction without overfitting. Lastly, we consider an example of real-life data that shows how our approach would see mainstream research implementation. Results: In the first experiment, both approaches successfully reach a ~95% accuracy threshold, although the short pre-training step enables sequential accuracy to plateau in fewer steps. The second experiment showed better performance with the second data-recruitment method, which crosses the ~90% accuracy threshold much sooner. The final experiment showed a slight advantage for the pre-training step, which allows the CNN to cross the ~60% threshold much sooner than without pre-training. Conclusion: We have demonstrated that sequential learning is a serviceable multi-classification technique, statistically comparable to traditional CNNs, that can acquire data in small increments feasible for clinically realistic scenarios.
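The Materials and Methods describe a continual training loop in which a CNN is repeatedly updated on small, newly arrived batches of images, optionally after a short base pre-training step. Below is a minimal PyTorch sketch of that general pattern; the architecture, hyperparameters, and names (SmallCNN, train_increment, sequential_train) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    # Illustrative small CNN; the paper's exact architecture is not specified here.
    class SmallCNN(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def train_increment(model, loader, epochs=1, lr=1e-3, device="cpu"):
        """Update the model on one newly arrived chunk of images (one sequential step)."""
        model.to(device).train()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
        return model

    def sequential_train(model, incoming_loaders, base_loader=None, device="cpu"):
        """Optionally pre-train on a small base set, then keep updating the same model
        as successive increments of new cases become available over time."""
        if base_loader is not None:  # short base pre-training step, as in the first experiment
            train_increment(model, base_loader, epochs=2, device=device)
        for loader in incoming_loaders:  # a stream of DataLoaders, one per new data increment
            train_increment(model, loader, epochs=1, device=device)
        return model

Creating a fresh optimizer for each increment keeps every update self-contained; carrying optimizer state across increments, and the heuristics the paper proposes for judging when an increment has been fully learned without a hold-out set, are design choices left to the original work.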

Bibliographic Details
Published in: arXiv.org, 2023-09-26
Authors: Naim, Sohaib; Caffo, Brian; Sair, Haris I; Jones, Craig K
Format: Article
Language: English
Subjects: Accuracy; Artificial neural networks; Image classification; Information retrieval; Learning; Medical imaging; Neural networks; Training; X-ray imagery
Publisher: Cornell University Library, arXiv.org (Ithaca)
EISSN: 2331-8422
Rights: http://creativecommons.org/licenses/by/4.0/
Online access: Full text
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-26T05%3A09%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Applications%20of%20Sequential%20Learning%20for%20Medical%20Image%20Classification&rft.jtitle=arXiv.org&rft.au=Naim,%20Sohaib&rft.date=2023-09-26&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2869390696%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2869390696&rft_id=info:pmid/&rfr_iscdi=true