SPCNet: a strip pyramid ConvNeXt network for detection of road surface defects


Detailed description

Bibliographic details
Published in: Signal, Image and Video Processing, 2024-02, Vol. 18 (1), p. 37-45
Main authors: Zhou, Ziang; Zhao, Wensong; Li, Jun; Song, Kechen
Format: Article
Language: English
Online access: Full text
Description

Summary: Road surface defect detection plays an important role in the construction and maintenance of roads. However, the irregularity of road surface defects and the complexity of the background make their extraction very difficult; extracting road surface defects accurately is therefore a challenge. To cope with this challenge, we apply deep-learning-based image segmentation. However, existing deep learning networks suffer from insufficient segmentation accuracy, low model robustness, and a lack of generalization ability. Consequently, we propose a novel deep learning network, the Strip Pyramid ConvNeXt Network, for detecting road surface defects. First, we introduce ConvNeXt as the encoder to ensure the segmentation accuracy of the model. Furthermore, we design a strip pyramid pooling module with excellent edge-detail extraction capability and a multi-feature fusion module. We also create a cementation fissure dataset (CE dataset) to test the accuracy of the model and to verify its generalization capability and robustness. Finally, we compare our model with ten recent advanced segmentation networks on the CRACK500, GAPs384, and CE datasets, and our model outperforms the others on four metrics.
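The abstract does not spell out the strip pyramid pooling module, but strip pooling in general averages a feature map along long, narrow windows (full rows and full columns), which suits elongated structures such as cracks. As context only, here is a minimal NumPy sketch of generic strip pooling on a single 2-D feature map; the function name, the summation-based fusion, and the omission of the learned 1-D convolutions used in real implementations are all simplifying assumptions, not the paper's actual module:

```python
import numpy as np

def strip_pool(feat):
    """Generic strip pooling sketch on a 2-D feature map of shape (H, W).

    Averages along each row (1 x W strip) and each column (H x 1 strip),
    broadcasts both profiles back to (H, W), and fuses them by summation.
    Real strip-pooling modules also apply learned 1-D convolutions to the
    pooled profiles; that step is omitted here for brevity (assumption).
    """
    h_strip = feat.mean(axis=1, keepdims=True)  # (H, 1): one value per row
    v_strip = feat.mean(axis=0, keepdims=True)  # (1, W): one value per column
    return h_strip + v_strip                    # broadcasts to (H, W)

feat = np.arange(6.0).reshape(2, 3)
print(strip_pool(feat).shape)  # (2, 3)
```

Because each output value mixes a whole-row and a whole-column average, long thin defects reinforce themselves along their own strip, which is the intuition behind using strip-shaped rather than square pooling windows for crack segmentation.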
ISSN: 1863-1703, 1863-1711
DOI: 10.1007/s11760-023-02698-6