Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras Using Adaptive Voxelization
Saved in:
Published in: | IEEE transactions on instrumentation and measurement 2022, Vol.71, p.1-12 |
Main authors: | Liu, Xiyuan; Yuan, Chongjian; Zhang, Fu |
Format: | Article |
Language: | English |
Online access: | Order full text |
creator | Liu, Xiyuan; Yuan, Chongjian; Zhang, Fu |
description | Determining the extrinsic parameters between multiple light detection and ranging (LiDAR) sensors and cameras is essential for autonomous robots, especially for solid-state LiDARs, where each LiDAR unit has a very small field of view (FoV) and multiple units are often used collectively. The majority of extrinsic calibration methods are proposed for 360° mechanical spinning LiDARs, where FoV overlap with other LiDAR or camera sensors is assumed. Few research works have focused on the calibration of small-FoV LiDARs and cameras, or on improving the calibration speed. In this work, we consider the problem of extrinsic calibration among small-FoV LiDARs and cameras, with the aim of shortening the total calibration time and further improving the calibration precision. We first implement an adaptive voxelization technique in the extraction and matching of LiDAR feature points. This process avoids the redundant creation of k-d trees in LiDAR extrinsic calibration and extracts LiDAR feature points more reliably and quickly than existing methods. We then formulate the multiple-LiDAR extrinsic calibration as a LiDAR bundle adjustment (BA) problem. By deriving the cost function up to second order, the solving time and precision of the nonlinear least-squares problem are further improved. Our proposed method has been verified on data collected in four targetless scenes and with two types of solid-state LiDARs with completely different scanning patterns, densities, and FoVs. The robustness of our work has also been validated under eight initial setups, with each setup containing 100 independent trials. Compared with state-of-the-art methods, our work increases the calibration speed 15 times for LiDAR-LiDAR extrinsic calibration (averaged over 100 independent trials) and 1.5 times for LiDAR-camera extrinsic calibration (averaged over 50 independent trials) while remaining accurate. To benefit the robotics community, we have also open-sourced our implementation code on GitHub. |
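The adaptive voxelization described in the abstract can be sketched as follows: a voxel of LiDAR points is accepted as a planar feature when the smallest eigenvalue of its point covariance (a point-to-plane residual of the kind used in the BA cost) falls below a threshold, and is otherwise split into eight octants on which the test recurses. This is a minimal illustrative sketch under assumed names, thresholds, and depth limit, not the authors' implementation:

```python
import numpy as np

def plane_cost(points):
    """Smallest eigenvalue of the covariance of `points` (Nx3).

    A small value means the points are well explained by a plane; this
    eigenvalue also serves as a per-voxel point-to-plane cost."""
    cov = np.cov(points.T)
    return np.linalg.eigvalsh(cov)[0]  # eigvalsh returns ascending order

def adaptive_voxelize(points, origin, size, max_depth=3,
                      planarity_thresh=1e-4, min_points=10):
    """Recursively split a cubic voxel until its points form a plane.

    Returns a list of (points, voxel_size) planar patches."""
    if len(points) < min_points:
        return []  # too few points to fit a reliable plane
    if plane_cost(points) < planarity_thresh or max_depth == 0:
        return [(points, size)]  # accept this voxel as one planar feature
    # Not planar yet: split into eight octants and recurse.
    patches = []
    half = size / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                o = origin + half * np.array([dx, dy, dz])
                mask = np.all((points >= o) & (points < o + half), axis=1)
                patches += adaptive_voxelize(points[mask], o, half,
                                             max_depth - 1,
                                             planarity_thresh, min_points)
    return patches
```

Because the recursion stops as soon as a voxel is planar, large flat surfaces stay as single patches while cluttered regions are subdivided, which is what avoids rebuilding k-d trees for nearest-neighbor feature matching.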
doi_str_mv | 10.1109/TIM.2022.3176889 |
format | Article |
identifier | ISSN: 0018-9456 |
ispartof | IEEE transactions on instrumentation and measurement, 2022, Vol.71, p.1-12 |
issn | 0018-9456; 1557-9662 |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | Bundle adjustment; Calibration; Cameras; Cost function; Feature extraction; Field of view; High-resolution mapping; Laser radar; Lidar; multiple light detection and ranging (LiDAR)–camera extrinsic calibration; Point cloud compression; Robot vision systems; Robotics; Sensors; small field-of-view (FoV) LiDAR; Solid state |
title | Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras Using Adaptive Voxelization |