Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation
Published in: | Medical image analysis, 2022-01, Vol. 75, Article 102293 |
Format: | Article |
Language: | English |
Online access: | Full text |
Summary: |
• A novel deep neural network, titled Multi-scale Residual Encoding and Decoding network (Ms RED), is proposed for skin lesion segmentation.
• A pair of multi-scale residual feature fusion modules is further proposed to collectively augment the capability of the overall network in exploiting multi-scale perceptual context and characteristics.
• A novel Multi-scale, Multi-channel Feature Fusion module is additionally proposed, which further boosts the capability of Ms RED to hierarchically extract and adaptively learn from the perceptual characteristics latent in an input image.
|
Computer-Aided Diagnosis (CAD) for dermatological diseases offers one of the most notable showcases of deep learning technologies matching and even surpassing human experts. A critical step in this CAD process is segmenting skin lesions from dermoscopic images. Despite the remarkable successes of recent deep learning efforts, much improvement is still needed on challenging cases, e.g., lesions that are irregularly shaped, have low contrast, or possess blurry boundaries. To address these inadequacies, this study proposes a novel Multi-scale Residual Encoding and Decoding network (Ms RED) for skin lesion segmentation, which segments a variety of lesions accurately, reliably, and efficiently. Specifically, a multi-scale residual encoding fusion module (MsR-EFM) is employed in the encoder, and a multi-scale residual decoding fusion module (MsR-DFM) in the decoder, to fuse multi-scale features adaptively. In addition, to enhance the representation learning capability of the proposed pipeline, we propose a novel multi-resolution, multi-channel feature fusion module (M2F2), which replaces conventional convolutional layers in the encoder and decoder networks. Furthermore, we introduce a novel pooling module (Soft-pool) to medical image segmentation for the first time, retaining more useful information during down-sampling and yielding better segmentation performance. To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art methods on ISIC 2016, 2017, 2018, and PH2. Experimental results consistently demonstrate that the proposed Ms RED attains significantly superior segmentation performance across five widely used evaluation criteria. Last but not least, the new model utilizes far fewer model parameters t… |
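The Soft-pool module mentioned in the abstract refers to softmax-weighted down-sampling, in which every value in a pooling window contributes in proportion to the exponential of its activation, so strong responses dominate without the weaker ones being discarded as in max pooling. A minimal NumPy sketch of this idea (the function name and non-overlapping window handling are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def soft_pool_2d(x, k=2):
    """Softmax-weighted k x k down-sampling (SoftPool-style sketch).

    Each output value is the exp-weighted average of its k*k window:
    larger activations dominate, but smaller ones still contribute,
    unlike max pooling, which keeps only the single largest value.
    """
    h, w = x.shape
    h2, w2 = h // k, w // k
    # split the map into non-overlapping k x k windows: (h2, w2, k*k)
    windows = (x[:h2 * k, :w2 * k]
               .reshape(h2, k, w2, k)
               .transpose(0, 2, 1, 3)
               .reshape(h2, w2, k * k))
    # numerically stable softmax weights within each window
    wgt = np.exp(windows - windows.max(axis=-1, keepdims=True))
    return (wgt * windows).sum(axis=-1) / wgt.sum(axis=-1)

x = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(soft_pool_2d(x, k=2))  # ~0.475: between average (0.25) and max (1.0)
```

The single isolated peak yields an output between the average-pool and max-pool results, which is the property the abstract credits with retaining more useful information during down-sampling.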
ISSN: | 1361-8415, 1361-8423 |
DOI: | 10.1016/j.media.2021.102293 |