Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method Using Segment Anything
Research on extrinsic calibration between Light Detection and Ranging (LiDAR) sensors and cameras is moving toward more accurate, automatic, and generic methods. Since deep learning was introduced to calibration, restrictions on the scene have been greatly reduced. However, data-driven methods suffer from low transferability: they cannot adapt to dataset variations without additional training. With the advent of foundation models, this problem can be significantly mitigated. Using the Segment Anything Model (SAM), we propose a novel LiDAR-camera calibration method that requires zero extra training and adapts to common scenes. Starting from an initial guess, we optimize the extrinsic parameters by maximizing the consistency of the points projected inside each image mask. The consistency covers three properties of the point cloud: intensity, normal vector, and categories derived from segmentation methods. Experiments on different datasets demonstrate the generality and comparable accuracy of our method. The code is available at https://github.com/OpenCalib/CalibAnything.
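The abstract's core idea — project LiDAR points into the image under a candidate extrinsic and score how consistent the points falling inside each segmentation mask are — can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function names (`project_points`, `intensity_consistency`) and the inverse-variance scoring are assumptions, and only the intensity term of the three consistency properties is shown.

```python
import numpy as np

def project_points(points_xyz, T_cam_lidar, K):
    """Project LiDAR points into the image plane.

    points_xyz  : (N, 3) LiDAR points
    T_cam_lidar : (4, 4) candidate extrinsic (LiDAR -> camera)
    K           : (3, 3) camera intrinsic matrix
    Returns pixel coordinates (N, 2) and a boolean mask of points
    in front of the camera.
    """
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # points in camera frame
    in_front = cam[:, 2] > 0.1                      # discard points behind the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective divide
    return uv, in_front

def intensity_consistency(uv, in_front, intensity, seg_masks):
    """Score one consistency term: lower intensity variance inside
    each SAM mask means the extrinsic aligns points with the object,
    so the score (summed inverse variance) is higher."""
    h, w = seg_masks[0].shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    score = 0.0
    for mask in seg_masks:
        inside = valid.copy()
        inside[valid] = mask[v[valid], u[valid]]    # points landing in this mask
        if inside.sum() < 5:                        # skip sparsely hit masks
            continue
        score += 1.0 / (1.0 + np.var(intensity[inside]))
    return score
```

An optimizer would evaluate this score over perturbations of the initial extrinsic guess and keep the transform that maximizes it; the full method additionally scores normal-vector and category consistency in the same fashion.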
Saved in:
Published in: | arXiv.org 2023-06 |
---|---|
Main authors: | Luo, Zhaotong; Guohang Yan; Li, Yikang |
Format: | Article |
Language: | eng |
Subjects: | Calibration; Cameras; Consistency; Datasets; Image segmentation; Lidar; Training |
Online access: | Full text |
container_title | arXiv.org |
creator | Luo, Zhaotong; Guohang Yan; Li, Yikang |
description | Research on extrinsic calibration between Light Detection and Ranging (LiDAR) sensors and cameras is moving toward more accurate, automatic, and generic methods. Since deep learning was introduced to calibration, restrictions on the scene have been greatly reduced. However, data-driven methods suffer from low transferability: they cannot adapt to dataset variations without additional training. With the advent of foundation models, this problem can be significantly mitigated. Using the Segment Anything Model (SAM), we propose a novel LiDAR-camera calibration method that requires zero extra training and adapts to common scenes. Starting from an initial guess, we optimize the extrinsic parameters by maximizing the consistency of the points projected inside each image mask. The consistency covers three properties of the point cloud: intensity, normal vector, and categories derived from segmentation methods. Experiments on different datasets demonstrate the generality and comparable accuracy of our method. The code is available at https://github.com/OpenCalib/CalibAnything. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2822892777 |
source | Free E-Journals |
subjects | Calibration; Cameras; Consistency; Datasets; Image segmentation; Lidar; Training |
title | Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method Using Segment Anything |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T10%3A03%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Calib-Anything:%20Zero-training%20LiDAR-Camera%20Extrinsic%20Calibration%20Method%20Using%20Segment%20Anything&rft.jtitle=arXiv.org&rft.au=Luo,%20Zhaotong&rft.date=2023-06-05&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2822892777%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2822892777&rft_id=info:pmid/&rfr_iscdi=true |