Integration of multiple dense point clouds based on estimated parameters in photogrammetry with QR code for reducing computation time

This paper describes a method for integrating multiple dense point clouds using a shared landmark to generate a single real-scale integrated result for photogrammetry. It is difficult to integrate high-density point clouds reconstructed by photogrammetry because the scale differs between individual photogrammetric reconstructions.

Detailed description

Bibliographic details
Published in: Artificial life and robotics, 2024-11, Vol. 29 (4), p. 546-556
Main authors: Nakamura, Keita; Baba, Keita; Watanobe, Yutaka; Hanari, Toshihide; Matsumoto, Taku; Imabuchi, Takashi; Kawabata, Kuniaki
Format: Article
Language: English
Subjects: Accuracy; Artificial Intelligence; Computation; Computation by Abstract Devices; Computer Science; Control; High density; Image reconstruction; Mechatronics; Original Article; Parameter estimation; Photogrammetry; Robotics
Online access: Full text
container_end_page 556
container_issue 4
container_start_page 546
container_title Artificial life and robotics
container_volume 29
creator Nakamura, Keita
Baba, Keita
Watanobe, Yutaka
Hanari, Toshihide
Matsumoto, Taku
Imabuchi, Takashi
Kawabata, Kuniaki
description This paper describes a method for integrating multiple dense point clouds using a shared landmark to generate a single real-scale integrated result for photogrammetry. It is difficult to integrate high-density point clouds reconstructed by photogrammetry because the scale differs between individual photogrammetric reconstructions. To solve this problem, this study places a QR code of known size, which serves as the shared landmark, in the reconstruction target environment and divides the environment based on the position of the placed QR code. Then, photogrammetry is performed for each divided environment to obtain a high-density point cloud for each. Finally, we propose a method of scaling each high-density point cloud based on the size of the QR code and aligning the clouds into a single high-density point cloud by partial-to-partial registration. To verify the effectiveness of the method, this paper compares the results obtained by applying all images to photogrammetry with those obtained by the proposed method in terms of accuracy and computation time. In this verification, ideal images generated by simulation and images obtained in real environments are applied to photogrammetry. We clarify the relationship between the number of divided environments, the accuracy of the reconstruction result, and the computation time required for the reconstruction.
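A minimal sketch of the scaling-and-merging idea the abstract describes, not the authors' implementation: it assumes the QR code's four corner points have already been located in each reconstructed cloud (ordered around the square), uses an assumed physical side length QR_SIDE_M, and substitutes Open3D's point-to-point ICP for the paper's partial-to-partial registration step. The helper names scale_to_metric and integrate are illustrative only.

import numpy as np
import open3d as o3d

# Assumed known physical side length of the printed QR code, in metres.
QR_SIDE_M = 0.20


def scale_to_metric(cloud, qr_corners):
    # qr_corners: (4, 3) array of the QR code's corner coordinates as they were
    # reconstructed inside this cloud, ordered around the square (how they are
    # extracted from the cloud is outside this sketch).
    # The ratio of the known side length to the mean reconstructed edge length
    # converts the cloud's arbitrary photogrammetric scale to real scale.
    edges = np.linalg.norm(np.roll(qr_corners, -1, axis=0) - qr_corners, axis=1)
    factor = QR_SIDE_M / edges.mean()
    cloud.scale(factor, center=np.zeros(3))
    return cloud


def integrate(source, target, init=np.eye(4), max_corr_dist=0.05):
    # Align 'source' onto 'target' with point-to-point ICP, then merge them.
    reg = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    source.transform(reg.transformation)
    return source + target  # Open3D point clouds support '+' for concatenation


# Hypothetical usage with two partial reconstructions sharing one QR code:
#   a = scale_to_metric(o3d.io.read_point_cloud("env_part_a.ply"), corners_a)
#   b = scale_to_metric(o3d.io.read_point_cloud("env_part_b.ply"), corners_b)
#   merged = integrate(a, b)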
doi_str_mv 10.1007/s10015-024-00966-3
format Article
fulltext fulltext
identifier ISSN: 1433-5298
ispartof Artificial life and robotics, 2024-11, Vol.29 (4), p.546-556
issn 1433-5298
1614-7456
language eng
recordid cdi_proquest_journals_3121049812
source SpringerLink Journals
subjects Accuracy
Artificial Intelligence
Computation
Computation by Abstract Devices
Computer Science
Control
High density
Image reconstruction
Mechatronics
Original Article
Parameter estimation
Photogrammetry
Robotics
title Integration of multiple dense point clouds based on estimated parameters in photogrammetry with QR code for reducing computation time
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T06%3A36%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Integration%20of%20multiple%20dense%20point%20clouds%20based%20on%20estimated%20parameters%20in%20photogrammetry%20with%20QR%20code%20for%20reducing%20computation%20time&rft.jtitle=Artificial%20life%20and%20robotics&rft.au=Nakamura,%20Keita&rft.date=2024-11-01&rft.volume=29&rft.issue=4&rft.spage=546&rft.epage=556&rft.pages=546-556&rft.issn=1433-5298&rft.eissn=1614-7456&rft_id=info:doi/10.1007/s10015-024-00966-3&rft_dat=%3Cproquest_cross%3E3121049812%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3121049812&rft_id=info:pmid/&rfr_iscdi=true