A Depth-Wise Separable U-Net Architecture with Multiscale Filters to Detect Sinkholes
Saved in:
Published in: Remote sensing (Basel, Switzerland), 2023-03, Vol. 15 (5), p. 1384
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Numerous variants of the basic deep segmentation model—U-Net—have emerged in recent years, achieving reliable performance across different benchmarks. In this paper, we propose an improved version of U-Net with higher performance and reduced complexity. This improvement was achieved by introducing a sparsely connected depth-wise separable block with multiscale filters, enabling the network to capture features of different scales. The use of depth-wise separable convolution significantly reduces the number of trainable parameters, making the training faster while reducing the risk of overfitting. We used our developed sinkhole dataset and the available benchmark nuclei dataset to assess the proposed model’s performance. Pixel-wise annotation is laborious and requires a great deal of human expertise; therefore, we propose a fully deep convolutional autoencoder network that utilizes the proposed block to automatically annotate the sinkhole dataset. Our segmentation model outperformed the state-of-the-art methods, including U-Net, Attention U-Net, Depth-Separable U-Net, and Inception U-Net, achieving an average improvement of 1.2% and 1.4%, respectively, on the sinkhole and the nuclei datasets, with 94% and 92% accuracy, as well as a reduced training time. It also achieved 83% and 80% intersection-over-union (IoU) on the two datasets, respectively, which is an 11.8% and 9.3% average improvement over the above-mentioned models. (An illustrative sketch of such a block is given after this record.)
ISSN: 2072-4292
DOI: 10.3390/rs15051384
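
To make the central idea of the abstract more concrete, below is a minimal sketch of a depth-wise separable convolution block with multiscale filters. It assumes PyTorch; the kernel sizes (3/5/7), channel counts, and concatenation-based fusion are illustrative choices, not the authors' exact configuration, and the sparse connectivity mentioned in the abstract is not reproduced here.

```python
# Minimal illustrative sketch of a depth-wise separable multiscale block.
# Kernel sizes, channel counts, and the concatenation-based fusion are
# assumptions for illustration, not the paper's exact configuration.
import torch
import torch.nn as nn


class DepthwiseSeparableMultiscaleBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int,
                 kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depth-wise convolution per scale: groups=in_channels means each
        # filter sees a single input channel, so the parameter count grows with
        # the kernel size only, not with in_channels * out_channels.
        self.depthwise = nn.ModuleList([
            nn.Conv2d(in_channels, in_channels, kernel_size=k,
                      padding=k // 2, groups=in_channels, bias=False)
            for k in kernel_sizes
        ])
        # Point-wise (1x1) convolution fuses the concatenated multiscale
        # features and mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels * len(kernel_sizes),
                                   out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([dw(x) for dw in self.depthwise], dim=1)
        return self.act(self.bn(self.pointwise(multiscale)))


if __name__ == "__main__":
    block = DepthwiseSeparableMultiscaleBlock(in_channels=64, out_channels=128)
    x = torch.randn(1, 64, 256, 256)   # e.g. one encoder feature map
    print(block(x).shape)              # torch.Size([1, 128, 256, 256])
```

The parameter saving the abstract refers to comes from factorizing the convolution: a standard k×k convolution from C_in to C_out channels has k²·C_in·C_out weights, whereas the depth-wise separable version has only k²·C_in (spatial filtering) plus C_in·C_out (point-wise mixing).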