A two‐stream deep neural network‐based intelligent system for complex skin cancer types classification


Detailed Description

Bibliographic Details
Published in: International Journal of Intelligent Systems, 2022-12, Vol. 37 (12), p. 10621-10649
Main authors: Attique Khan, Muhammad; Sharif, Muhammad; Akram, Tallha; Kadry, Seifedine; Hsu, Ching‐Hsien
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: Medical imaging systems installed in different hospitals and labs generate images in bulk, which can support clinicians in analyzing infections or injuries. Manual inspection becomes difficult as the number of images grows; therefore, intelligent systems are usually required for real-time diagnosis. Melanoma is one of the most common and severe forms of skin cancer, beginning in the cells beneath the skin. Through dermoscopic images, it is possible to diagnose the infection at an early stage. In this regard, different approaches have been exploited for improved results. In this study, we propose a two-stream deep neural network information fusion framework for multiclass skin cancer classification. The proposed technique follows two streams: initially, a fusion-based contrast enhancement technique is proposed, which feeds enhanced images to the pretrained DenseNet201 architecture. The extracted features are later optimized using a skewness-controlled moth-flame optimization algorithm. In the second stream, deep features from the fine-tuned MobileNetV2 pretrained network are extracted and down-sampled using the proposed feature selection framework. Finally, the most discriminant features from both networks are fused using a new parallel multimax coefficient correlation method. A multiclass extreme learning machine classifier is used to classify lesion images. The testing process is initiated on three imbalanced skin data sets: HAM10000, ISBI2018, and ISIC2019. The simulations are performed without any data augmentation step, achieving accuracies of 96.5%, 98%, and 89%, respectively. A fair comparison with existing techniques reveals the improved performance of the proposed algorithm.
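The pipeline described in the summary can be illustrated schematically. This is a minimal, hypothetical sketch of the two-stream data flow only: the actual method uses DenseNet201 and MobileNetV2 deep features, a skewness-controlled moth-flame optimizer for selection, parallel multimax coefficient-correlation fusion, and an extreme learning machine classifier. Each of those stages is replaced here by a trivial stand-in (random features, variance-based selection, concatenation), and all function names and dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, dim):
    # Stand-in for a pretrained CNN backbone (e.g., DenseNet201 or
    # MobileNetV2): one dim-length feature vector per input image.
    return rng.standard_normal((len(images), dim))

def select_features(feats, k):
    # Stand-in for the optimization-based feature selector: keep the
    # k columns with the highest variance across the batch.
    idx = np.argsort(feats.var(axis=0))[-k:]
    return feats[:, idx]

def fuse(a, b):
    # Stand-in for the parallel fusion step: concatenate the selected
    # features from both streams along the feature axis.
    return np.concatenate([a, b], axis=1)

images = list(range(8))                                     # dummy image batch
s1 = select_features(extract_features(images, 1920), 256)   # stream 1
s2 = select_features(extract_features(images, 1280), 256)   # stream 2
fused = fuse(s1, s2)                                        # per-image descriptor
print(fused.shape)                                          # (8, 512)
```

The fused descriptor would then be passed to a multiclass classifier; the point of the sketch is only that each stream extracts and reduces features independently before a single fusion step.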
ISSN: 0884-8173; 1098-111X
DOI: 10.1002/int.22691