Deep learning-based correction of defocused fringe patterns for high-speed 3D measurement


Full Description

Bibliographic Details
Published in: Advanced Engineering Informatics, 2023-10, Vol. 58, p. 102221, Article 102221
Main authors: Hou, Lei; Xi, Dejun; Luo, Jun; Qin, Yi
Format: Article
Language: English
Online access: Full text
Description
Abstract: Digital fringe projection profilometry often faces a trade-off between measurement accuracy and efficiency. Defocus technology is commonly employed to address this challenge and improve the efficiency of high-speed three-dimensional (3D) measurement: 1-bit binary fringe patterns are projected in place of traditional 8-bit sinusoidal patterns, and lens defocus smooths them into quasi-sinusoidal fringes. However, measuring 3D shapes with both high speed and high accuracy remains difficult because of defocus errors, which are introduced by the manual adjustment of the lens focal length and degrade both fringe pattern quality and measurement accuracy. To overcome this limitation, we propose a multi-stage generative adversarial network with a self-attention mechanism that corrects inaccurate fringe patterns and transforms them into near-ideal sinusoidal fringe patterns. The generator comprises a multi-stage feature extraction network with a self-attention mechanism and an encoder-decoder network. The multi-stage network integrates residual and transformer modules to mine global feature information; the self-attention mechanism locates the key areas that need correction, and the encoder-decoder network generates rectified sinusoidal fringe patterns by combining the extracted features with the attended regions. A discriminative network judges whether the generator's output is realistic enough to pass for a true sinusoidal pattern. In our experiments, we considered different fringe widths and measured objects of various types and colors. The results show that the proposed method improves the quality of defocused fringe patterns and the accuracy of subsequent 3D reconstruction compared with existing direct defocus methods.
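The binary-defocus idea the abstract builds on can be illustrated with a short numpy sketch. This is not code from the paper: the function names are illustrative, and projector defocus is modeled here by a simple moving-average (box) low-pass filter rather than a true optical point-spread function. A 1-bit binary fringe is just a thresholded sinusoid (a square wave), and low-pass blurring suppresses its higher harmonics, so the defocused pattern approximates the ideal 8-bit sinusoidal fringe.

```python
import numpy as np

def sinusoidal_fringe(width, period, phase=0.0):
    """Ideal 8-bit sinusoidal fringe: I(x) = 127.5 * (1 + cos(2*pi*x/period + phase))."""
    x = np.arange(width)
    return 127.5 * (1.0 + np.cos(2 * np.pi * x / period + phase))

def binary_fringe(width, period, phase=0.0):
    """1-bit binary fringe: the sinusoid thresholded at its mean (a square wave)."""
    return np.where(sinusoidal_fringe(width, period, phase) >= 127.5, 255.0, 0.0)

def defocus(pattern, kernel_size):
    """Crude defocus model (an assumption): a box low-pass filter."""
    kernel = np.ones(kernel_size) / kernel_size
    return np.convolve(pattern, kernel, mode="same")

width, period = 512, 32
ideal = sinusoidal_fringe(width, period)
binary = binary_fringe(width, period)
blurred = defocus(binary, kernel_size=15)

# Blurring the square wave attenuates its odd harmonics, so the defocused
# pattern is closer to the ideal sinusoid than the sharp binary pattern is.
interior = slice(period, width - period)  # skip convolution edge effects
err_binary = np.mean(np.abs(binary[interior] - ideal[interior]))
err_blurred = np.mean(np.abs(blurred[interior] - ideal[interior]))
print(err_blurred < err_binary)  # → True
```

The residual gap between `blurred` and `ideal` is exactly the kind of defocus error the paper's generative network is trained to remove: in practice the blur depends on a manually adjusted focal length and is neither uniform nor perfectly tuned to the fringe period.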
ISSN: 1474-0346
EISSN: 1873-5320
DOI: 10.1016/j.aei.2023.102221