Fine retinal vessel segmentation by combining Nest U-net and patch-learning
Saved in:
Published in: | Soft Computing (Berlin, Germany), 2021-04, Vol. 25 (7), p. 5519-5532 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Summary: | In retinal vessel segmentation, fine vessel details are difficult to segment accurately, and single-pixel fine vessels are prone to being misclassified by both unsupervised and supervised approaches. To address this problem, we propose a novel fine retinal vessel segmentation method that combines Nest U-net and patch-learning. A dedicated extraction strategy was designed to generate a large number of training samples containing fine retinal vessels, which gives the method a clear advantage in fine vessel segmentation. Nest U-net, which directly fast-forwards high-resolution feature maps from the encoder to the decoder network, was designed as a new image segmentation model. The model was trained with a k-fold cross-validation strategy, the testing samples were predicted, and the final retinal vessel map was reconstructed by a sequential reconstruction strategy. The proposed method was tested on the publicly available DRIVE and STARE datasets. Sensitivity (SE), specificity (SP), accuracy (ACC), area under the curve (AUC), F1-score, and Jaccard similarity score (JSC) were adopted as evaluation metrics. The results achieved on these public datasets (DRIVE: SE = 0.8060, SP = 0.9869, ACC = 0.9512, AUC = 0.9748, F1-score = 0.7863; STARE: SE = 0.8230, SP = 0.9945, ACC = 0.9641, AUC = 0.9620, F1-score = 0.7947) were higher than those of other state-of-the-art methods. The proposed method achieves state-of-the-art segmentation results in terms of both visual quality and objective assessment. |
---|---|
ISSN: | 1432-7643 1433-7479 |
DOI: | 10.1007/s00500-020-05552-w |
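The patch-learning pipeline described in the abstract — extract patches, predict each one, then sequentially reconstruct the full vessel map — can be sketched as below. This is a minimal illustration only: the patch size, stride, and overlap-averaging rule are assumptions for the sketch, not the authors' published settings, and the per-patch model is replaced by an identity mapping.

```python
# Hypothetical sketch of patch extraction and sequential reconstruction.
# Patch size (48) and stride (24) are illustrative assumptions; the real
# pipeline would feed each patch through the trained Nest U-net model.
import numpy as np

def extract_patches(image, patch=48, stride=24):
    """Slide a window over the image and collect (patch, position) pairs."""
    h, w = image.shape[:2]
    patches, positions = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            positions.append((y, x))
    return np.stack(patches), positions

def reconstruct(pred_patches, positions, shape, patch=48):
    """Sequentially paste predicted patches back; average where they overlap."""
    acc = np.zeros(shape, dtype=np.float64)
    cnt = np.zeros(shape, dtype=np.float64)
    for p, (y, x) in zip(pred_patches, positions):
        acc[y:y + patch, x:x + patch] += p
        cnt[y:y + patch, x:x + patch] += 1.0
    return acc / np.maximum(cnt, 1.0)  # avoid division by zero at gaps

# Usage with a dummy fundus image and an identity stand-in for the model:
img = np.random.rand(96, 96)
patches, pos = extract_patches(img)
vessel_map = reconstruct(patches, pos, img.shape)
```

With an identity "model" the averaged reconstruction recovers the input exactly wherever patches cover it, which makes the overlap bookkeeping easy to verify before swapping in a real per-patch predictor.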