S²C-DeLeNet: A parameter transfer based segmentation-classification integration for detecting skin cancer lesions from dermoscopic images


Detailed Description

Bibliographic Details
Published in: Computers in Biology and Medicine, 2022-11, Vol. 150, Article 106148
Main Authors: Alam, Md Jahin, Mohammad, Mir Sayeed, Hossain, Md Adnan Faisal, Showmik, Ishtiaque Ahmed, Raihan, Munshi Sanowar, Ahmed, Shahed, Mahmud, Talha Ibn
Format: Article
Language: English
Subjects:
Online Access: Full Text
Description
Abstract: Dermoscopic images ideally depict pigmentation attributes of the skin surface, which are highly regarded in the medical community for the detection of skin abnormality, disease, or even cancer. Identifying such abnormalities, however, requires trained eyes, and accurate detection makes the process time-intensive. As such, computerized detection schemes have become essential, especially those which adopt deep learning tactics. In this paper, a convolutional deep neural network, S²C-DeLeNet, is proposed, which (i) segments lesion regions relative to the unaffected skin tissue in dermoscopic images using a segmentation sub-network, and (ii) classifies each image by its medical condition type utilizing parameters transferred from that segmentation sub-network. The segmentation sub-network uses an EfficientNet-B4 backbone as its encoder, and the classification sub-network bears a 'Classification Feature Extraction' system which draws on trained segmentation feature maps for lesion prediction. Inside the classification architecture, two components are designed: (i) a 'Feature Coalescing Module' to track and fuse features of each dimension from both encoder and decoder, and (ii) a '3D-Layer Residuals' block to create a parallel pathway of low-dimensional, high-variance features for better classification. After fine-tuning on a publicly accessible dataset, a mean dice score of 0.9494 is obtained for segmentation, beating existing segmentation strategies, and a mean accuracy of 0.9103 is obtained for classification, outperforming conventional and noted classifiers. Additionally, the fine-tuned network demonstrates highly satisfactory results on other skin cancer segmentation datasets during cross-inference.
Extensive experimentation is conducted to demonstrate the efficacy of the network not only for dermoscopic images but also for other medical imaging modalities, showing its potential as a systematic diagnostic solution in the field of dermatology and possibly beyond.
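The core idea of the abstract, transferring the trained segmentation encoder's parameters into the classification sub-network, can be illustrated with a minimal conceptual sketch. This is not the authors' implementation: the layer names, weight shapes, and the 7-class head are illustrative assumptions, with plain Python lists standing in for weight tensors.

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    """Stand-in for a trained weight tensor."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# Stand-in "segmentation network": a shared encoder plus a decoder.
seg_net = {
    "encoder.block1": rand_matrix(8, 3),
    "encoder.block2": rand_matrix(4, 8),
    "decoder.up1":    rand_matrix(8, 4),
}

def build_classifier_from_segmenter(seg_params, n_classes=7):
    """Copy only the encoder parameters from the trained segmenter
    and attach a new, untrained classification head."""
    clf = {k: [row[:] for row in v]            # deep-copy encoder weights
           for k, v in seg_params.items() if k.startswith("encoder.")}
    feat_dim = len(seg_params["encoder.block2"])   # encoder output width
    clf["head.fc"] = [[0.0] * feat_dim for _ in range(n_classes)]
    return clf

clf_net = build_classifier_from_segmenter(seg_net)
# Encoder weights match the segmenter's; the decoder is not transferred.
assert clf_net["encoder.block1"] == seg_net["encoder.block1"]
assert "decoder.up1" not in clf_net
```

In a real framework this would amount to loading the segmenter's encoder state into the classifier before fine-tuning, so the classifier starts from lesion-aware features rather than random initialization.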
ISSN:0010-4825
1879-0534
DOI:10.1016/j.compbiomed.2022.106148