Robust monocular vision-based monitoring system for multi-target displacement measurement of bridges under complex backgrounds

Bibliographic Details
Published in: Mechanical Systems and Signal Processing, 2025-02, Vol. 225, p. 112242, Article 112242
Authors: Zhu, Weizhu; Cui, Zurong; Chen, Lei; Zhou, Zhixiang; Chu, Xi; Zhu, Shifeng
Format: Article
Language: English
Online access: Full text
Description
Abstract:
Highlights:
• A system configuration determination metric is proposed to balance FOVH and precision.
• A background segmentation model based on the fusion of a CNN and a Transformer is proposed to enhance the system's stability.
• A novel method of rectifying the displacement error induced by the camera orientation is proposed.
• The efficacy of the robust monocular vision-based monitoring system (RMVMS) is validated on a tied arch bridge.

Vision-based multi-target monitoring systems for bridge structures provide a comprehensive evaluation of structural safety. However, their application to field bridges has been constrained by challenges such as the trade-off between the field of view (FOV) and accuracy, as well as the impact of camera orientation and complex backgrounds on measurement effectiveness. This study introduces a robust monocular vision-based monitoring system (RMVMS) for multi-target displacement measurement. First, a system configuration determination method is developed to achieve an effective balance between FOV and accuracy. Next, a hybrid network structure, ConvTransNet, is introduced to mitigate the impact of complex background disturbances. Additionally, a novel multi-target displacement transformation model (MDTM) is proposed to correct errors arising from camera orientation. Moreover, a boundary loss function and an RMSProp learning rate schedule were implemented during training, enabling ConvTransNet to achieve optimal performance with a P-R threshold of 0.45. A 4-meter laboratory-scale bridge model test demonstrated the superiority of ConvTransNet over existing segmentation models on a custom dataset formatted according to the Pascal VOC 2012 standard. MDTM effectively reduced orientation-induced errors from 17.93 % to 1.53 %. The efficiency and robustness of RMVMS were further validated on a tied arch bridge, achieving RMSE and NRMSE below 0.162 mm and 3.63 %, respectively, confirming its capability for precise multi-target displacement monitoring in field applications.
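The accuracy figures quoted above (RMSE and NRMSE of the measured displacements against a reference sensor) can be computed from paired displacement time histories. The sketch below is a minimal illustration only, assuming NRMSE is the RMSE normalized by the peak-to-peak range of the reference signal (the abstract does not state the normalization the authors use), and all data and variable names are hypothetical.

```python
import numpy as np

def rmse(measured: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square error between vision-based and reference displacements."""
    return float(np.sqrt(np.mean((measured - reference) ** 2)))

def nrmse(measured: np.ndarray, reference: np.ndarray) -> float:
    """RMSE normalized by the peak-to-peak range of the reference displacement.
    Assumed normalization; other definitions (e.g. by the mean) are also common."""
    return rmse(measured, reference) / float(reference.max() - reference.min())

# Hypothetical example: a sinusoidal reference displacement (mm) plus measurement noise
t = np.linspace(0.0, 10.0, 1000)
reference = 5.0 * np.sin(2.0 * np.pi * 0.5 * t)
measured = reference + np.random.normal(0.0, 0.1, t.size)

print(f"RMSE  = {rmse(measured, reference):.3f} mm")
print(f"NRMSE = {100.0 * nrmse(measured, reference):.2f} %")
```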
ISSN: 0888-3270
DOI: 10.1016/j.ymssp.2024.112242