A deep convolutional network combining layerwise images and defect parameter vectors for laser powder bed fusion process anomalies classification

Bibliographic Details
Published in: Journal of Intelligent Manufacturing, 2024-08, Vol. 35 (6), pp. 2929-2959
Main Authors: Jiang, Zimeng; Zhang, Aoming; Chen, Zhangdong; Ma, Chenguang; Yuan, Zhenghui; Deng, Yifan; Zhang, Yingjie
Format: Article
Language: English
Online Access: Full text
Abstract: Defect detection is essential for ensuring the quality of parts made by laser powder bed fusion (LPBF), and industrial cameras are among the most commonly used tools for defect monitoring. Different lighting environments affect the visibility of defects in images, making illumination one of the most important factors in camera-based defect detection; however, modifying the equipment's lighting environment increases the complexity and cost of monitoring. In this study, only an off-axis CMOS camera monitoring system is used, and the lighting facilities are left unchanged, to improve defect detection under uneven lighting conditions. A dual-input convolutional neural network that fuses defect parameter vectors with layerwise images is proposed for real-time online monitoring of defects in the LPBF process using a paraxial CMOS camera monitoring system. The model integrates the image with parameter information related to defect generation and can distinguish some defects that are not easily discerned from images alone. To a certain extent, this avoids the problem of identical defects being visually indistinguishable in images because of uneven light distribution and reflections on metal surfaces. The results indicate that the method performs better than a single-image-input approach, with recognition accuracies above 80.00% for all defect categories. In addition, its low parameter count, short training time, and fast prediction speed make it better suited to real-time online monitoring than classical deep learning algorithms.
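The dual-input pattern described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the framework (PyTorch), the class name DualInputDefectClassifier, the layer sizes, the 6-element parameter vector, and the 4 defect classes are all illustrative assumptions. It only shows the general idea of fusing a convolutional branch for the layerwise image with a small fully connected branch for the defect parameter vector before classification.

# Minimal sketch of a dual-input CNN fusing a layerwise image with a defect
# parameter vector. All sizes and class counts are illustrative assumptions.
import torch
import torch.nn as nn

class DualInputDefectClassifier(nn.Module):
    def __init__(self, num_params: int = 6, num_classes: int = 4):
        super().__init__()
        # Image branch: a few conv/pool blocks on the grayscale layerwise image.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
        )
        # Parameter branch: a small MLP on the defect parameter vector.
        self.param_branch = nn.Sequential(
            nn.Linear(num_params, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors and classify.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 32, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image)    # (B, 512)
        par_feat = self.param_branch(params)   # (B, 32)
        fused = torch.cat([img_feat, par_feat], dim=1)
        return self.classifier(fused)          # (B, num_classes) logits

# Usage example: a batch of 8 single-channel layerwise images plus 6 parameters each.
model = DualInputDefectClassifier()
logits = model(torch.randn(8, 1, 128, 128), torch.randn(8, 6))
print(logits.shape)  # torch.Size([8, 4])

Concatenating the two feature vectors before the classification head is one simple way to let image evidence and process-parameter evidence jointly decide the defect class, which is the property the abstract credits for distinguishing defects that look alike under uneven lighting.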
ISSN: 0956-5515; 1572-8145
DOI: 10.1007/s10845-023-02183-4