Evolving medical image classification: a three-tiered framework combining MSPLnet and IRNet-VGG19

Detailed description

Bibliographic details
Published in: Evolving Systems, 2025-02, Vol. 16 (1), p. 19, Article 19
Authors: Annapoorani, G.; Manikandan, P.; Genitha, C. Heltin
Format: Article
Language: English
Online access: Full text
Abstract: Image classification is an important process in the big-data revolution in healthcare, and several developments have considerably improved digital clinical image processing for classification and diagnosis. Medical image classification is an essential task in many medical imaging applications, and Convolutional Neural Networks (CNNs) have shown strong performance in classifying medical images. However, CNNs and conventional standardized classifiers are limited by reliability concerns such as overfitting, inefficient feature extraction, and computational complexity. This paper therefore proposes a novel three-tiered model for medical image classification that differs from conventional multi-class classification frameworks. In the first tier, data preparation covers the collection and transformation of five clinical datasets: Octoscope, Skin Cancer (PAD-UFES-20), the Kvasir dataset, a Covid-19 dataset, and Chest X-Ray Images (Pneumonia); pre-processing ensures the raw data is cleansed and organized for efficient analysis and training. In the second tier, a Multi-head Self-attention Progressive Learning Network (MSPLnet) performs feature extraction on the pre-processed data, leveraging multi-head self-attention and progressive learning to extract features more effectively than traditional methods. In the third tier, the extracted features are classified by the Inception Residual Network-VGG19 (IRNet-VGG19), which combines the strengths of Inception modules with the deep architecture of VGG19 to further improve classification accuracy. Evaluated on all five datasets, IRNet-VGG19 achieves classification accuracies of 0.993, 0.966, 0.994, 0.984, and 0.968 respectively, outperforming competing methods.
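The record does not include the authors' code, so the following PyTorch sketch is only a rough illustration of the second and third tiers described in the abstract: multi-head self-attention applied to VGG19 convolutional feature maps, with the pooled features passed to a small classification head. All class names, layer sizes, and hyperparameters are assumptions for illustration, not the published MSPLnet or IRNet-VGG19 architectures.

```python
# Hypothetical sketch, not the authors' implementation: multi-head self-attention
# over CNN feature maps (tier 2, stand-in for MSPLnet) plus a simple classifier
# head (tier 3, stand-in for IRNet-VGG19).
import torch
import torch.nn as nn
from torchvision.models import vgg19


class AttentionFeatureExtractor(nn.Module):
    """Assumed tier 2: self-attention over flattened VGG19 feature maps."""

    def __init__(self, embed_dim=512, num_heads=8):
        super().__init__()
        self.backbone = vgg19(weights=None).features          # conv feature maps
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                                      # x: (B, 3, 224, 224)
        fmap = self.backbone(x)                                # (B, 512, 7, 7)
        tokens = fmap.flatten(2).transpose(1, 2)               # (B, 49, 512)
        attended, _ = self.attn(tokens, tokens, tokens)        # self-attention
        return self.norm(tokens + attended).mean(dim=1)        # pooled: (B, 512)


class ThreeTierClassifier(nn.Module):
    """Assumed tier 3: a plain classification head over the attended features."""

    def __init__(self, num_classes):
        super().__init__()
        self.extractor = AttentionFeatureExtractor()
        self.head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.head(self.extractor(x))


# Example: classify a batch of pre-processed chest X-ray images into 2 classes.
model = ThreeTierClassifier(num_classes=2)
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)                                            # torch.Size([4, 2])
```

In the paper's pipeline, the third-tier classifier is the Inception-residual/VGG19 hybrid rather than this plain linear head, and progressive learning is applied during feature-extraction training; both are simplified away in this sketch.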
DOI: 10.1007/s12530-024-09647-9
ISSN: 1868-6478
EISSN: 1868-6486
Publisher: Springer Berlin Heidelberg
Source: Springer Nature - Complete Springer Journals
Subjects: Accuracy
Artificial Intelligence
Artificial neural networks
Big Data
Classification
Complex Systems
Complexity
Datasets
Deep learning
Digital imaging
Efficiency
Engineering
Feature extraction
Image classification
Image processing
Learning
Medical electronics
Medical imaging
Original Paper
Performance evaluation
System reliability