IMFF-Net: An integrated multi-scale feature fusion network for accurate retinal vessel segmentation from fundus images
Published in: Biomedical Signal Processing and Control, 2024-05, Vol. 91, p. 105980, Article 105980
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
Highlights:
• A network for precise segmentation of retinal vessels, called IMFF-Net, is proposed.
• An Attention Pooling Feature fusion block is proposed to reduce the spatial information loss stemming from multiple pooling operations.
• An Upsampling and Downsampling Feature Fusion block is proposed to segment the fine structures of the retina.
• The proposed IMFF-Net achieves good results on all three public datasets.
Extracting vascular structures from retinal fundus images plays a critical role in the early diagnosis and long-term treatment of ophthalmic diseases. Traditional manual segmentation of retinal vessels is time-consuming and demands a high level of expertise. In recent years, deep learning has made significant strides in retinal vessel segmentation, but it still faces challenges in fine vessel segmentation, such as the loss of spatial information caused by multi-level feature extraction and the blurring of fine structures in the segmentation output. To address these issues, we propose a multi-scale feature fusion segmentation network, IMFF-Net, built around two fusion blocks. First, an Attention Pooling Feature Fusion (APF) block is proposed, which combines Max Pooling and Average Pooling branches and incorporates a Squeeze-and-Excitation (SE) block; this design effectively mitigates the spatial information loss stemming from multiple pooling operations. Second, an Upsampling and Downsampling Feature Fusion (UDFF) block is proposed to merge the feature maps of each downsampling stage with the upsampling feature maps in a weighted manner, thereby facilitating the precise segmentation of fine structures. To validate the performance of the proposed IMFF-Net, we conducted experiments on three retinal blood vessel segmentation datasets: DRIVE, STARE, and CHASE_DB1. IMFF-Net achieved outstanding results on the test sets of these three public datasets, with accuracies of 0.9621, 0.9707, and 0.9730, and sensitivities of 0.8575, 0.8634, and 0.8048, respectively. These results demonstrate the superior performance of IMFF-Net compared to the backbone network and other state-of-the-art methods. Our code is available at: https://github.com/wangyunyuwyy/IMFF-Net.
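The record does not reproduce the implementation, so the sketch below is only a minimal PyTorch reading of the two fusion ideas the abstract describes: an APF-style block that fuses max- and average-pooled features and reweights them with a standard SE block, and a UDFF-style weighted merge of a downsampling feature map with an upsampling feature map. All module names, channel sizes, pooling kernels, and the learnable fusion weight are illustrative assumptions, not the authors' code (see the linked GitHub repository for the actual implementation).

```python
# Illustrative sketch only: one possible reading of the APF and UDFF ideas
# from the abstract. Names, channel counts, and hyperparameters are assumed.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise reweighting


class APFBlock(nn.Module):
    """Hypothetical Attention Pooling Feature fusion: max-pool and avg-pool
    branches are concatenated, fused by a 1x1 conv, and reweighted with an SE
    block to limit the spatial information lost by a single pooling operation."""
    def __init__(self, channels: int):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.se = SEBlock(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([self.max_pool(x), self.avg_pool(x)], dim=1)
        return self.se(self.fuse(pooled))


class UDFFBlock(nn.Module):
    """Hypothetical weighted fusion of a downsampling feature map with an
    upsampling feature map of the same shape, using a learnable scalar weight."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, down_feat: torch.Tensor, up_feat: torch.Tensor) -> torch.Tensor:
        return self.alpha * down_feat + (1 - self.alpha) * up_feat


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)      # e.g. an encoder feature map
    y = APFBlock(64)(x)
    print(y.shape)                        # -> torch.Size([1, 64, 64, 64])
    print(UDFFBlock()(y, torch.randn_like(y)).shape)
```

The scalar fusion weight in UDFFBlock is just the simplest possible weighted merge; per-channel or learned spatial weighting would be equally consistent with the abstract's wording.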
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2024.105980