EDRNet: Encoder-Decoder Residual Network for Salient Object Detection of Strip Steel Surface Defects
Published in: IEEE Transactions on Instrumentation and Measurement, 2020-12, Vol. 69 (12), pp. 9709-9719
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: It is still a challenging task to detect the surface defects of strip steel because of their complex variations, including variable defect types, cluttered background, low contrast, and noise interference. Existing detection methods cannot effectively segment defect objects from complex backgrounds and have poor real-time performance. To address these issues, we propose a novel saliency detection method based on an Encoder-Decoder Residual network (EDRNet). In the encoder stage, we use a fully convolutional neural network to extract rich multilevel defect features and fuse in an attention mechanism to accelerate the convergence of the model. In the decoder stage, we alternately apply the channels weighted block (CWB) and the residual decoder block (RDB) to integrate the spatial features of shallower layers with the semantic features of deep layers and to recover the predicted spatial saliency values step by step. Finally, we design a residual refinement structure with 1D filters (RRS_1D) to further optimize the coarse saliency map. Compared with existing saliency detection methods, the deeply supervised EDRNet accurately segments complete defect objects with well-defined boundaries and effectively filters out irrelevant background noise. Extensive experimental results show that our method consistently outperforms state-of-the-art methods by large margins with strong robustness, and detection runs at over 27 fps on a single GPU.
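The abstract describes the CWB as weighting and fusing shallow spatial features with deep semantic features. Below is a minimal NumPy sketch of one plausible form of such channel weighting (a squeeze-and-excitation-style gate, which is our assumption; the paper's actual CWB is a learned block whose exact structure is not given here). The function name and shapes are illustrative only.

```python
import numpy as np

def channel_weighted_fusion(deep_feat, shallow_feat):
    """Hypothetical sketch of a channels-weighted block (CWB).

    Channel statistics of the deep (semantic) features gate the
    shallow (spatial) features channel-wise before a residual fusion.
    Both inputs have shape (C, H, W).
    """
    # Squeeze: global average pool over the spatial dims -> (C,)
    squeeze = deep_feat.mean(axis=(1, 2))
    # Excite: per-channel sigmoid gate (no learned layers in this sketch)
    gate = 1.0 / (1.0 + np.exp(-squeeze))
    # Re-weight the shallow features channel-wise and fuse residually
    return deep_feat + gate[:, None, None] * shallow_feat

deep = np.ones((4, 8, 8))      # toy deep semantic features
shallow = np.ones((4, 8, 8))   # toy shallow spatial features
fused = channel_weighted_fusion(deep, shallow)
print(fused.shape)  # (4, 8, 8)
```

In a real network the squeeze/excite step would pass through small learned layers, and the fused map would feed the next residual decoder block (RDB); this sketch only illustrates the channel-gating idea.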
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2020.3002277