DilUnet: A U-net based architecture for blood vessels segmentation

Bibliographic details
Published in: Computer Methods and Programs in Biomedicine, May 2022, Vol. 218, Article 106732
Authors: Hussain, Snawar; Guo, Fan; Li, Weiqing; Shen, Ziqi
Format: Article
Language: English
Abstract:

Highlights:
• Dilated convolutions ensure better feature transfer and more accurate classification, resulting in a boost in sensitivity.
• Dilated convolutions serve as the main constituents of the encoder blocks, decoder blocks, and skip connections.
• Dilated convolutions with different rates are more effective at capturing dynamically complex blood vessels.
• The proposed weighted multi-output fusion retrieves a high-fidelity vascular map by extracting critical features from each output block.

Retinal image segmentation can help clinicians detect pathological disorders by studying changes in retinal blood vessels. This early detection can help prevent blindness and many other vision impairments. So far, several supervised and unsupervised methods have been proposed for automatic blood vessel segmentation. However, the sensitivity and robustness of these methods can be improved by correctly classifying more vessel pixels. We propose an automatic retinal blood vessel segmentation method based on the U-net architecture. This end-to-end framework uses preprocessing and a data-augmentation pipeline for training. The architecture combines multiscale input and multioutput modules with improved skip connections and the careful use of dilated convolutions for effective feature extraction. In the multiscale input module, the input image is scaled down and concatenated with the output of convolutional blocks at different points along the encoder path, so that features of the original image are carried forward. The multioutput module obtains upsampled outputs from each decoder block, which are combined to produce the final output. Skip paths connect each encoder block with the corresponding decoder block, and the whole architecture applies different dilation rates to improve overall feature extraction. The proposed method achieved accuracies of 0.9680, 0.9694, and 0.9701; sensitivities of 0.8837, 0.8263, and 0.8713; and Intersection over Union (IoU) scores of 0.8698, 0.7951, and 0.8184 on three publicly available datasets: DRIVE, STARE, and CHASE, respectively. An ablation study shows the contribution of each proposed module and technique. The evaluation metrics reveal that the proposed method outperforms the original U-net, other U-net-based architectures, and many other state-of-the-art segmentation techniques, and that it is robust to noise.
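To make the dilated-convolution idea in the abstract concrete, below is a minimal PyTorch sketch of an encoder-style block that runs parallel 3x3 convolutions at several dilation rates and fuses them with a 1x1 convolution. The rates (1, 2, 4) and channel widths are illustrative assumptions, not the configuration reported by the authors; this sketches the technique, not their implementation.

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates.

    Padding equals the dilation rate, so every branch preserves the
    spatial size; branch outputs are concatenated and fused by a
    1x1 convolution. Rates and widths are assumed for illustration.
    """
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Shape check: a 1-channel 64x64 patch in, 32 feature maps out.
block = DilatedConvBlock(1, 32)
print(block(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```

Larger rates widen the receptive field without pooling, which is why multiple rates help capture vessels of very different calibers.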
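The multiscale input can be sketched the same way: the input image is downscaled to each encoder stage's resolution and concatenated with that stage's features before the next convolution. The stage widths, the average-pooled image pyramid, and the number of scales are assumptions; the abstract only states that the scaled input is concatenated at different points of the encoder path.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleInputEncoder(nn.Module):
    """Encoder path whose stages also see a downscaled copy of the input.

    At stage i the original image is average-pooled by a factor of 2**i
    (an assumed pyramid) and concatenated with the running features.
    """
    def __init__(self, in_ch: int = 1, widths=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(prev + in_ch, w, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            prev = w
        self.pool = nn.MaxPool2d(2)

    def forward(self, x: torch.Tensor):
        feats, cur = [], x
        for i, stage in enumerate(self.stages):
            scaled = F.avg_pool2d(x, kernel_size=2 ** i)  # image at 1/2**i
            cur = stage(torch.cat([cur, scaled], dim=1))
            feats.append(cur)          # kept for the skip connections
            cur = self.pool(cur)       # halve resolution for the next stage
        return feats

# Shape check on a 1-channel 64x64 patch.
for f in MultiScaleInputEncoder()(torch.randn(1, 1, 64, 64)):
    print(f.shape)  # [1,32,64,64] -> [1,64,32,32] -> [1,128,16,16]
```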
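Finally, a sketch of the weighted multi-output fusion: each decoder block gets a hypothetical 1x1 side head, every side output is upsampled to full resolution, and learnable weights control each output's contribution to the final vessel map. The softmax normalization of the weights is an assumption; the abstract only states that the upsampled outputs are combined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Combine per-decoder-block side outputs with learnable weights."""
    def __init__(self, decoder_channels=(128, 64, 32)):
        super().__init__()
        # One 1x1 "side head" per decoder block maps features to a logit map.
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, 1, kernel_size=1) for c in decoder_channels]
        )
        # Softmax-normalized weights are an assumed fusion scheme.
        self.weights = nn.Parameter(torch.zeros(len(decoder_channels)))

    def forward(self, decoder_feats, out_size):
        w = torch.softmax(self.weights, dim=0)
        fused = 0.0
        for wi, head, f in zip(w, self.heads, decoder_feats):
            side = F.interpolate(head(f), size=out_size,
                                 mode='bilinear', align_corners=False)
            fused = fused + wi * side
        return torch.sigmoid(fused)  # per-pixel vessel probability

# Usage: three decoder feature maps at 1/4, 1/2, and full resolution.
feats = [torch.randn(1, 128, 16, 16),
         torch.randn(1, 64, 32, 32),
         torch.randn(1, 32, 64, 64)]
prob = WeightedFusion()(feats, out_size=(64, 64))
print(prob.shape)  # torch.Size([1, 1, 64, 64])
```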
ISSN: 0169-2607; 1872-7565
DOI: 10.1016/j.cmpb.2022.106732