Data augmentation for skin lesion using self-attention based progressive generative adversarial network

Bibliographic details
Published in: Expert Systems with Applications, 2021-03, Vol. 165, p. 113922, Article 113922
Main authors: Abdelhalim, Ibrahim Saad Aly; Mohamed, Mamdouh Farouk; Mahdy, Yousef Bassyouni
Format: Article
Language: English
Online access: Full text
Description
Abstract: While recent years have witnessed the remarkable success of deep learning methods in automated skin lesion detection systems, there still exists a gap between the manual assessment of experts and the automated evaluation of computers. The reason behind such a gap is that deep learning models demand considerable amounts of data, while the availability of annotated images is often limited. Data Augmentation (DA) is one way to mitigate the lack of labeled data; however, the augmented images intrinsically have a distribution similar to the original ones, leading to limited performance improvement. To compensate for the lack of data in the real image distribution, we synthesize skin lesion images – realistic but completely different from the original ones – using Generative Adversarial Networks (GANs). In this paper, we propose the Self-attention Progressive Growing of GANs (SPGGAN) to generate fine-grained 256 × 256 skin lesion images for Convolutional Neural Network-based melanoma detection, which is challenging with conventional GANs; difficulties arise from unstable GAN training at high resolution and from the variety of skin lesions in size, shape, and location. In SPGGAN, details can be generated using aggregated information from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, the Two-Timescale Update Rule (TTUR) is applied to SPGGAN (SPGGAN-TTUR) to improve training stability while generating 256 × 256 skin lesion images. SPGGAN-TTUR is evaluated on data generation and classification tasks using the HAM10000 dataset. Our results confirm the importance of the proposed GAN-based DA approach for training skin lesion classifiers and indicate that it can lead to statistically significant improvements (p-value
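To make the two ingredients of the abstract more concrete, the sketch below illustrates (a) a SAGAN-style self-attention block of the kind that lets a progressive GAN's generator and discriminator aggregate information from all feature locations, and (b) TTUR-style optimizers that update the discriminator with a larger learning rate than the generator. This is not the authors' SPGGAN implementation; the module, channel sizes, and learning rates are illustrative assumptions written in PyTorch.

```python
# Illustrative sketch only, not the authors' SPGGAN code. Names, channel
# counts, and learning rates are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Lets every spatial position attend to all other positions of a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # (b, hw, c//8)
        k = self.key(x).view(b, -1, h * w)                       # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)                # (b, hw, hw) attention map
        v = self.value(x).view(b, -1, h * w)                     # (b, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                              # residual connection

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)           # dummy feature map
    print(SelfAttention2d(64)(feats).shape)      # torch.Size([2, 64, 32, 32])

    # TTUR: the discriminator is updated with a larger learning rate than the
    # generator (the 4e-4 / 1e-4 split is a common choice and may differ from
    # the paper's exact settings).
    generator = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), SelfAttention2d(64))
    discriminator = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), SelfAttention2d(64))
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
```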
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2020.113922