EFAM-Net: A Multi-Class Skin Lesion Classification Model Utilizing Enhanced Feature Fusion and Attention Mechanisms
Saved in:
Published in: | IEEE Access, 2024, Vol. 12, p. 143029-143041 |
Main authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Skin cancer, arising from common malignant tumors of the skin, is a major threat to patients' health. Automated classification of skin lesions using computer algorithms is crucial for enhancing diagnostic efficiency and reducing mortality rates associated with skin cancer, so improving the capabilities of image classification models for skin lesions is essential for accurately classifying patients' skin diseases. To this end, a novel EFAM-Net model is proposed in this paper for the skin lesion classification task. First, a newly designed Attention Residual Learning ConvNeXt (ARLC) block extracts low-level features such as color and texture from images. Then, the deep-layer blocks of the network are replaced with a newly designed Parallel ConvNeXt (PCNXt) block, allowing the model to capture richer and more complex features. Additionally, a newly designed Multi-scale Efficient Attention Feature Fusion (MEAFF) block enhances feature extraction at multiple scales, enabling the model to capture more comprehensive features in specific layers, fuse feature maps of different scales, and enhance feature reuse at the end of the network. EFAM-Net is evaluated experimentally on the public ISIC 2019 and HAM10000 datasets, as well as on a private dataset. The results show that EFAM-Net achieves the best classification performance among all compared models, with overall accuracies of 92.30%, 93.95%, and 94.31% on the ISIC 2019, HAM10000, and private datasets, respectively. |
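The abstract's fusion idea, combining feature maps of different scales after gating each one with channel attention, can be illustrated with a minimal NumPy sketch. All function names here are hypothetical, and this is a generic multi-scale attention fusion, not the paper's MEAFF implementation:

```python
import numpy as np

def channel_attention(x):
    """Gate each channel by a sigmoid of its global average pool.
    x: feature map of shape (C, H, W)."""
    pooled = x.mean(axis=(1, 2))               # global average pool -> (C,)
    weights = 1.0 / (1.0 + np.exp(-pooled))    # sigmoid gate per channel
    return x * weights[:, None, None]

def upsample_nearest(x, size):
    """Nearest-neighbor resize of (C, H, W) to (C, size, size)."""
    _, H, W = x.shape
    rows = np.arange(size) * H // size
    cols = np.arange(size) * W // size
    return x[:, rows][:, :, cols]

def fuse_multiscale(features):
    """Resize all scales to the largest spatial size, apply channel
    attention to each, then sum the gated maps."""
    target = max(f.shape[1] for f in features)
    resized = [upsample_nearest(f, target) for f in features]
    gated = [channel_attention(f) for f in resized]
    return np.sum(gated, axis=0)

# Example: fuse an 8x8 and a 4x4 feature map, 16 channels each
rng = np.random.default_rng(0)
f1 = rng.standard_normal((16, 8, 8))
f2 = rng.standard_normal((16, 4, 4))
fused = fuse_multiscale([f1, f2])
print(fused.shape)  # (16, 8, 8)
```

In a real network the attention weights would be learned (e.g. with a small convolution over the pooled vector) rather than a plain sigmoid of the pooled means; the sketch only shows the resize-gate-sum structure.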
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2024.3468612 |