Automated reconstruction model of a cross‐sectional drawing from stereo photographs based on deep learning
Saved in:
Published in: | Computer-Aided Civil and Infrastructure Engineering, 2024-02, Vol. 39 (3), pp. 383-405 |
Main authors: | , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | This study presents a novel, deep-learning-based model for the automated reconstruction of a cross-sectional drawing from stereo photographs. Targeted cross-sections captured in stereo photographs are detected and translated into sectional drawings using a faster region-based convolutional neural network (Faster R-CNN) and a Pix2Pix generative adversarial network. To address the challenge of perspective correction in the photographs, a novel camera pose optimization method is introduced and employed. This method eliminates the need for camera calibration and image matching, thereby offering greater flexibility in camera positioning and facilitating the use of telephoto lenses while avoiding image-matching errors. Moreover, synthetic image datasets are used for training to facilitate the practical implementation of the proposed model in construction-industry applications, given the limited availability of open datasets in this field. The applicability of the proposed model was evaluated through experiments on the cross-sections of curtain wall components. The results demonstrated superior measurement accuracy compared with current laser-scanning and camera-based measurement methods for construction components. |
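The abstract gives no implementation details, so the following is only a minimal sketch of the detect-then-translate stage it describes, not the authors' code: an off-the-shelf torchvision Faster R-CNN proposes cross-section regions, and each crop is passed to a Pix2Pix-style generator. The checkpoint path, score threshold, and 256x256 crop size are assumptions; the stereo pose optimization and perspective correction steps are omitted.

```python
# Illustrative sketch only (not the authors' implementation): detect candidate
# cross-sections with Faster R-CNN, then translate each crop into a sectional
# drawing with a Pix2Pix-style generator.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, resize

# Off-the-shelf Faster R-CNN; the paper trains its detector on synthetic
# curtain-wall images, which are not reproduced here.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Hypothetical checkpoint of a trained Pix2Pix (U-Net) generator.
generator = torch.load("pix2pix_generator.pt")
generator.eval()

def reconstruct_sections(photo, score_threshold=0.7):
    """Return a generated sectional drawing for each detected cross-section.

    `photo` is a PIL image of one view of the stereo pair; camera pose
    optimization and perspective correction are not modeled in this sketch.
    """
    with torch.no_grad():
        detections = detector([to_tensor(photo)])[0]
        drawings = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_threshold:
                continue
            x0, y0, x1, y1 = box.int().tolist()
            crop = to_tensor(photo.crop((x0, y0, x1, y1)))
            crop = resize(crop, [256, 256]).unsqueeze(0)  # assumed Pix2Pix input size
            drawings.append(generator(crop))
        return drawings
```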
ISSN: | 1093-9687, 1467-8667 |
DOI: | 10.1111/mice.13083 |