Imaging-based crack detection on concrete surfaces using You Only Look Once network


Bibliographic Details
Published in: Structural Health Monitoring, 2021-03, Vol. 20(2), pp. 484-499
Authors: Deng, Jianghua; Lu, Ye; Lee, Vincent Cheng-Siong
Format: Article
Language: English
Online access: Full text
Description
Abstract: The detection of cracks in concrete structures is a pivotal aspect of assessing structural robustness. Current inspection methods are subjective, relying on the inspector's experience and mental focus. In this study, an ad hoc You Only Look Once version 2 object detector was applied to automatically detect concrete cracks in real-world images, which were taken from diverse concrete bridges and contaminated with handwriting scripts. A total of 3010 cropped images were used to generate the dataset, labelled for two detection classes: cracks and handwriting. The proposed network was then trained and tested on the generated image dataset. Three full-scale images containing distracting background information were used to evaluate the robustness of the trained detector. The influence of labelling handwriting as an object class during network training on the overall crack detection accuracy was also assessed. The results of this study show that You Only Look Once version 2 could automatically locate cracks with bounding boxes in raw images, even in the presence of handwriting scripts. As a comparative study, the proposed network was also compared with the Faster Region-based Convolutional Neural Network (Faster R-CNN), and You Only Look Once version 2 performed better in terms of both accuracy and inference speed.
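The abstract describes fine-tuning an object detector for two classes (crack and handwriting) and comparing against Faster R-CNN. As an illustrative sketch only, not the authors' implementation (the paper uses YOLOv2, which is not available in torchvision), the following Python snippet fine-tunes torchvision's Faster R-CNN, the comparison baseline named above, for the same two-class task. The class-id mapping and the single synthetic training sample are assumptions made so the sketch runs end to end.

```python
# Illustrative sketch (not the paper's code): fine-tune torchvision's
# Faster R-CNN, the study's comparison baseline, for two object classes,
# crack and handwriting. Class ids and the dummy sample are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + crack + handwriting

# Start from a COCO-pretrained model and replace the box-predictor head
# so it outputs scores and box offsets for the two new classes.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device).train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One synthetic image with one "crack" box; a real pipeline would
# iterate over a DataLoader of the labelled cropped images instead.
images = [torch.rand(3, 256, 256, device=device)]
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 120.0, 200.0]], device=device),  # x1, y1, x2, y2
    "labels": torch.tensor([1], device=device),  # 1 = crack, 2 = handwriting (assumed)
}]

# In train mode, torchvision detection models return a dict of losses.
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()

# In eval mode, the model returns per-image boxes, labels, and scores,
# i.e. the bounding-box localisation described in the abstract.
model.eval()
with torch.no_grad():
    detections = model(images)[0]  # keys: "boxes", "labels", "scores"
```

The replace-the-head pattern shown here is the standard way to adapt a pretrained torchvision detector to a new label set; the paper's YOLOv2 network follows the same overall recipe (pretrained backbone, task-specific detection head, training on the labelled crack/handwriting crops), though its architecture and training details differ.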
ISSN: 1475-9217
EISSN: 1741-3168
DOI: 10.1177/1475921720938486