Downstream Semantic Segmentation Model for Low-Level Surface Crack Detection
Published in: Advances in Multimedia, 2022-05, Vol. 2022, p. 1-12
Author:
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Surface crack detection is essential for roads and other built structures in most countries, so it has become a popular computer vision topic for automating structural health monitoring. Recently, many deep learning engineers have attempted to solve the problem. However, to the best of our knowledge, most previous work has focused on designing and training a deep learning model from scratch, which is highly technical and very time-consuming. This study proposes a new approach that uses downstream models to accelerate the development of deep learning models for pixel-level crack detection. An off-the-shelf semantic segmentation model, DeepLabV3-ResNet101, is used as the base model and trained with different loss functions and training strategies. Our experimental results reveal that downstream models trained with the classic cross-entropy loss function cannot provide reasonable results in pixel-level crack detection. The most successful downstream model we found is trained with the focal loss function without using the pretrained weights that accompany the base model. Our selected downstream model generalizes well across different test datasets and yields optimal dataset-scale F-measures of 84.49% on CrackTree260, 80.29% on CRKWH100, 72.55% on CrackLS315, and 75.72% on Stone331.
ISSN: 1687-5680, 1687-5699
DOI: 10.1155/2022/3712289
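
The setup described in the abstract can be illustrated with a short sketch. The example below is a hypothetical, minimal reconstruction assuming the torchvision implementation of DeepLabV3-ResNet101 and illustrative focal-loss hyperparameters (gamma, alpha); the paper's exact configuration, datasets, and training schedule are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet101

# Base model with a 2-class head (background vs. crack).
# weights=None skips the COCO-pretrained segmentation weights; whether the
# ImageNet backbone weights are also disabled (weights_backbone=None) is an
# assumption about what "without pretrained weights" means in the abstract.
model = deeplabv3_resnet101(weights=None, weights_backbone=None, num_classes=2)

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Pixel-wise focal loss; gamma and alpha values are illustrative assumptions."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # (N, H, W)
    pt = torch.exp(-ce)                                      # prob. of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

# Typical training step (images: (N, 3, H, W) floats, masks: (N, H, W) int64 labels):
# logits = model(images)["out"]   # torchvision segmentation models return a dict
# loss = focal_loss(logits, masks)
# loss.backward()
```

Focal loss down-weights easy, well-classified pixels, which is commonly preferred over plain cross-entropy when the positive class (crack pixels) makes up only a small fraction of each image; this is consistent with the abstract's finding that cross-entropy-trained downstream models did not produce reasonable pixel-level results.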