Multisensor Data Fusion for Reliable Obstacle Avoidance

In this work, we propose a new approach that combines data from multiple sensors for reliable obstacle avoidance. The sensors include two depth cameras and a LiDAR arranged so that they can capture the whole 3D area in front of the robot and a 2D slice around it. To fuse the data from these sensors, we first use an external camera as a reference to combine data from two depth cameras. A projection technique is then introduced to convert the 3D point cloud data of the cameras to its 2D correspondence. An obstacle avoidance algorithm is then developed based on the dynamic window approach. A number of experiments have been conducted to evaluate our proposed approach. The results show that the robot can effectively avoid static and dynamic obstacles of different shapes and sizes in different environments.
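The first fusion step described above — collapsing the depth cameras' 3D point clouds into a 2D representation compatible with the LiDAR's planar scan — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, height band, beam count, and maximum range are all assumptions.

```python
import numpy as np

def pointcloud_to_scan(points, z_min=0.05, z_max=1.0,
                       n_beams=360, max_range=10.0):
    """Collapse a 3D point cloud (N x 3, robot frame) into a 2D
    range scan by keeping points in a vertical band and taking the
    nearest return per angular bin."""
    # Keep only points inside the height band of interest.
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    ranges = np.full(n_beams, max_range)
    if band.size == 0:
        return ranges
    r = np.hypot(band[:, 0], band[:, 1])
    theta = np.arctan2(band[:, 1], band[:, 0])  # in [-pi, pi)
    bins = ((theta + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    # Record the nearest obstacle seen in each beam.
    np.minimum.at(ranges, bins, np.clip(r, 0.0, max_range))
    return ranges
```

A scan in this form can be merged with the LiDAR's own 2D scan beam-by-beam (e.g., by taking the minimum range), which is one plausible reading of the fusion described in the abstract.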


Bibliographic details
Main authors: Canh, Thanh Nguyen, Nguyen, Truong Son, Quach, Cong Hoang, HoangVan, Xiem, Phung, Manh Duong
Format: Article
Language: eng
Subjects: Computer Science - Robotics
Online access: Order full text
creator Canh, Thanh Nguyen
Nguyen, Truong Son
Quach, Cong Hoang
HoangVan, Xiem
Phung, Manh Duong
description In this work, we propose a new approach that combines data from multiple sensors for reliable obstacle avoidance. The sensors include two depth cameras and a LiDAR arranged so that they can capture the whole 3D area in front of the robot and a 2D slice around it. To fuse the data from these sensors, we first use an external camera as a reference to combine data from two depth cameras. A projection technique is then introduced to convert the 3D point cloud data of the cameras to its 2D correspondence. An obstacle avoidance algorithm is then developed based on the dynamic window approach. A number of experiments have been conducted to evaluate our proposed approach. The results show that the robot can effectively avoid static and dynamic obstacles of different shapes and sizes in different environments.
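The avoidance algorithm is based on the dynamic window approach. The following is a minimal sketch of that general algorithm, not the paper's implementation: the sampling resolution, simulation horizon, cost weights, and all parameter names are assumptions.

```python
import numpy as np

def dwa_step(pose, v, w, goal, obstacles,
             v_max=0.5, w_max=1.5, a_v=0.5, a_w=2.0,
             dt=0.1, horizon=1.5, robot_radius=0.2):
    """One dynamic-window step: sample admissible (v, w) pairs,
    roll each out for `horizon` seconds, and return the pair that
    best trades off goal progress, obstacle clearance, and speed."""
    best, best_score = (0.0, 0.0), -np.inf
    # Dynamic window: velocities reachable within one control cycle.
    for vs in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 5):
        for ws in np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 11):
            x, y, th = pose
            clearance = np.inf
            for _ in range(int(horizon / dt)):  # forward-simulate the motion
                th += ws * dt
                x += vs * np.cos(th) * dt
                y += vs * np.sin(th) * dt
                if obstacles.size:
                    d = np.min(np.hypot(obstacles[:, 0] - x, obstacles[:, 1] - y))
                    clearance = min(clearance, d)
            if clearance < robot_radius:  # trajectory collides; discard
                continue
            progress = -np.hypot(goal[0] - x, goal[1] - y)  # closer is better
            score = 2.0 * progress + 0.5 * min(clearance, 2.0) + 0.3 * vs
            if score > best_score:
                best_score, best = score, (vs, ws)
    return best
```

In practice the obstacle set would be the fused 2D scan converted to Cartesian points, and `dwa_step` would run once per control cycle with the current odometry.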
doi 10.48550/arxiv.2212.13218
format Article
identifier DOI: 10.48550/arxiv.2212.13218
language eng
recordid cdi_arxiv_primary_2212_13218
source arXiv.org
subjects Computer Science - Robotics
title Multisensor Data Fusion for Reliable Obstacle Avoidance