A Color and Depth Image Defect Segmentation Framework for EEE Reuse and Recycling



Bibliographic details
Authors: Wu, Yifan; Zhou, Chuangchuang; Sterkens, Wouter; Piessens, Mathijs; De Marelle, Dieter; Dewulf, Wim; Peeters, Jef
Format: Conference proceedings
Language: English
Online access: Order full text
Description
Abstract: Defect inspection is a crucial step in evaluating the reuse potential of Electrical and Electronic Equipment (EEE). Manual defect inspection requires experienced operators and is subjective, which highlights the need for automatic defect detection. However, the accuracy of color image-based defect segmentation is known to be limited, as defects and backgrounds often have similar textures and colors. In addition, the reflective properties of materials commonly used in EEE complicate defect detection. Furthermore, manual segmentation labeling is highly time-consuming. Therefore, in this paper, an automatic defect segmentation pipeline fusing color (RGB) and depth (D) images with deep learning is presented. To simplify the annotation process, multiple-angle images of the same device are captured using an integrated RGBD camera, and only bounding box annotations are made. These RGB and D images are then processed by the developed framework for simultaneous defect detection and instance segmentation. The modular design of this framework allows a free choice of fusion methods (addition, concatenation, Cross-attention Module, etc.) as well as different backbones and prediction heads, and provides the predicted bounding boxes as prompts for the Segment Anything Model (SAM) family. Finally, the proposed method is evaluated on a laptop dataset consisting of 513 RGB images and corresponding depth images, in which two defect categories, missing batteries and missing pieces, are annotated. Results show that the RGBD-based Faster R-CNN-FPN with SAM outperforms the RGB-based method, highlighting the potential of automated RGBD defect segmentation for EEE reuse and recycling using deep learning and data fusion techniques.
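The two simplest fusion methods named in the abstract, addition and concatenation, can be illustrated with a minimal sketch. The function names and the pure-Python array representation below are illustrative assumptions, not the authors' implementation (which operates on tensors inside a deep network): concatenation stacks the RGB channels and the depth channel into a single four-channel input, while addition merges two same-shaped feature maps element-wise.

```python
def concat_fusion(rgb, depth):
    """Channel-wise concatenation: an (H x W x 3) RGB image and an (H x W)
    depth map become one (H x W x 4) input. Illustrative stand-in for
    tensor concatenation along the channel axis."""
    return [[list(px) + [d] for px, d in zip(rgb_row, d_row)]
            for rgb_row, d_row in zip(rgb, depth)]

def add_fusion(feat_a, feat_b):
    """Element-wise addition of two single-channel (H x W) feature maps,
    e.g. intermediate RGB and depth features of matching shape."""
    return [[a + b for a, b in zip(a_row, b_row)]
            for a_row, b_row in zip(feat_a, feat_b)]

# Example: a 1 x 2 "image" with hypothetical pixel and depth values.
rgb = [[[10, 20, 30], [40, 50, 60]]]
depth = [[7, 9]]
print(concat_fusion(rgb, depth))  # [[[10, 20, 30, 7], [40, 50, 60, 9]]]
```

In the paper's modular framework, such a fusion step would sit between the two input streams and the chosen backbone; cross-attention replaces these fixed operations with learned weighting.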