PDLFBR-Net: Partial Decoder Localization and Foreground-Background Refinement Network for Polyp Segmentation

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 114280-114294
Authors: Peng, Yanbin; Feng, Mingkun; Zhai, Zhinian; Zheng, Zhijun
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Polyp segmentation is vital for the early detection and treatment of colorectal cancer, significantly improving patient prognosis. This paper proposes an efficient and precise polyp segmentation model, the Partial Decoder Localization and Foreground-Background Refinement Network (PDLFBR-Net), which simulates the human object recognition process. Specifically, PDLFBR-Net comprises three key modules: the Cross-level Attention-enhanced Fusion Module (CAFM), the Position Recognition Module (PRM), and the Foreground-Background Refinement Module (FBRM). The CAFM enhances feature representation by fusing information from adjacent levels, yielding more discriminative features. The PRM mimics the human recognition process by using a partial decoder to locate potential polyp tissue from a global perspective. The FBRM then performs fine-grained recognition, gradually refining the initial prediction by focusing on the foreground and background in turn. Extensive experiments demonstrate that the proposed PDLFBR-Net significantly outperforms existing state-of-the-art models on five challenging datasets. On the Kvasir-SEG benchmark dataset, it reaches a mean Dice of 93.7% and a mean IoU of 89.5%, improvements of 0.4% and 0.6%, respectively, over the best-performing state-of-the-art (SOTA) method.
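
The abstract outlines a three-stage pipeline (cross-level fusion, coarse localization via a partial decoder, foreground-background refinement) but gives no implementation details. The following minimal PyTorch sketch illustrates one plausible reading of that pipeline; all layer choices, channel sizes, and module internals are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CAFM(nn.Module):
    # Cross-level Attention-enhanced Fusion Module (sketch): fuses a
    # feature level with its adjacent, deeper level.
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, kernel_size=1), nn.Sigmoid())

    def forward(self, cur, deeper):
        # Upsample the deeper level to the current resolution, concatenate,
        # then weight the fused features with a learned attention gate.
        deeper = F.interpolate(deeper, size=cur.shape[2:], mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([cur, deeper], dim=1))
        return fused * self.gate(fused)

class PRM(nn.Module):
    # Position Recognition Module (sketch): a partial-decoder head over the
    # deepest features only, producing a coarse localization map.
    def __init__(self, ch):
        super().__init__()
        self.head = nn.Conv2d(ch, 1, kernel_size=1)

    def forward(self, feat):
        return self.head(feat)  # coarse polyp-position logits

class FBRM(nn.Module):
    # Foreground-Background Refinement Module (sketch): refines a coarse
    # prediction by attending to foreground and background separately.
    def __init__(self, ch):
        super().__init__()
        self.refine = nn.Conv2d(2 * ch, 1, kernel_size=3, padding=1)

    def forward(self, feat, pred):
        pred = F.interpolate(pred, size=feat.shape[2:], mode="bilinear", align_corners=False)
        fg = feat * torch.sigmoid(pred)        # foreground-focused features
        bg = feat * (1 - torch.sigmoid(pred))  # background-focused features
        return pred + self.refine(torch.cat([fg, bg], dim=1))  # residual update

# Hypothetical usage with three backbone levels of 64 channels each:
feats = [torch.randn(1, 64, s, s) for s in (44, 22, 11)]
cafm, prm, fbrm = CAFM(64), PRM(64), FBRM(64)
fused = cafm(feats[1], feats[2])   # fuse adjacent levels
coarse = prm(feats[2])             # global localization
refined = fbrm(fused, coarse)      # foreground-background refinement

In the paper, the refinement step is presumably applied repeatedly across decoder levels so the initial PRM prediction is sharpened progressively; the single residual update above stands in for that loop.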
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3445428