Learning Visual Similarity for Inspecting Defective Railway Fasteners

Bibliographic Details
Published in: IEEE Sensors Journal, 2019-08, Vol. 19 (16), pp. 6844-6857
Authors: Liu, Junbo; Huang, Yaping; Zou, Qi; Tian, Mei; Wang, Shengchun; Zhao, Xinxin; Dai, Peng; Ren, Shengwei
Format: Article
Language: English
Description
Summary: Vision-based automatic railway fastener inspection, as an alternative to manual inspection, remains a great challenge. Although many supervised learning-based methods have been developed, expensive training labels and imbalanced data are the main obstacles to improving the performance of the fastener inspection task. To tackle these problems, we present a novel vision-based fastener inspection system (VFIS) inspired by few-shot learning. VFIS can automatically collect and annotate a large number of fastener samples using the proposed online template matching-based classification method, and it requires only a very small number of annotated fastener templates. Moreover, we employ a similarity-based deep network to address the problem of the imbalanced dataset. Comprehensive experiments are conducted on a large-scale fastener dataset. VFIS yields competitive performance on both fastener localization and fastener classification: an average detection rate of 99.36% is achieved for fastener localization, and an average accuracy of 92.69% is achieved for fastener classification. Moreover, for identifying defective fasteners, the proposed method achieves an average precision of 92.63% and an average recall of 92.88%.
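The abstract names two techniques worth unpacking: an online template matching step that bootstraps annotations from a handful of labeled templates, and a similarity-based deep network that sidesteps class imbalance by comparing images rather than learning per-class decision boundaries. The sketch below is a minimal illustration of the template-matching idea only, assuming OpenCV and grayscale crops; the function name and threshold are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: classify a cropped fastener image by comparing it
# against a few labeled templates with normalized cross-correlation.
# Names and the 0.8 threshold are hypothetical, not from the paper's code.
import cv2
import numpy as np

def classify_by_template_matching(crop, templates, threshold=0.8):
    """Return the label of the best-matching template, or None if no
    template scores above `threshold`.

    crop:      grayscale image of a candidate fastener region (np.ndarray)
    templates: dict mapping class label -> list of grayscale template images
    """
    best_label, best_score = None, -1.0
    for label, images in templates.items():
        for tpl in images:
            # Resize the template to the crop size so matchTemplate yields
            # a single correlation score for the whole region.
            tpl = cv2.resize(tpl, (crop.shape[1], crop.shape[0]))
            score = cv2.matchTemplate(crop, tpl, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_score:
                best_label, best_score = label, score
    return best_label if best_score >= threshold else None
```

Crops that match a template confidently can be auto-labeled and folded into the training set, which is how a small number of annotated templates can seed a large dataset. In the same spirit, the similarity-based classifier can be sketched as a small Siamese-style embedding network: a query fastener takes the label of its most similar template, so rare defect classes need only a few exemplars. Again, the architecture below is an assumption for illustration, not the network from the paper.

```python
# Equally illustrative: a minimal Siamese-style similarity scorer in PyTorch.
# Because classification reduces to comparing embeddings, class frequencies
# never enter the decision rule, which mitigates dataset imbalance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 128)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-length embeddings

def similarity(net, query, template):
    # Cosine similarity between embeddings; higher means more alike.
    return (net(query) * net(template)).sum(dim=1)
```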
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2019.2911015