Real-Time Object Detection for LiDAR Based on LS-R-YOLOv4 Neural Network
Self-driving cars have recently become a major challenge for the automobile industry. The DARPA challenges, which demonstrated self-driving systems that can be classified as SAE Level 3 or higher, drove the industry to focus more heavily on autonomous driving, and many companies subsequently began designing self-driving cars based on these designs. Sensors such as radar, high-resolution cameras, and LiDAR are essential for a self-driving car to perceive its surroundings. LiDAR acts as the eye of the vehicle, offering 64 scanning channels, a 26.9° vertical field of view, and a high-precision 360° horizontal field of view in real time, with a detection range of up to 120 meters of environmental depth information. Left and right cameras additionally supply front-image information. Together, these sensors yield an accurate model of the surrounding environment that the driving algorithm can use for route planning and, critically, for collision avoidance.
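The abstract's first stage maps segmented LiDAR points to image regions of interest. As a minimal illustrative sketch (not the authors' code), the Python below shows the standard KITTI-style projection chain that carries Velodyne points into the left color image and turns one segmented cluster into a padded ROI box. Matrix names follow the KITTI devkit convention; the homogeneous 4x4 padding of `R0_rect` and `Tr_velo_to_cam` and the `pad` margin are assumptions of this sketch.

```python
import numpy as np

def project_velo_to_image(points_xyz, Tr_velo_to_cam, R0_rect, P2):
    """Project Nx3 LiDAR points into the left color image plane.

    Standard KITTI calibration chain: y ~ P2 @ R0_rect @ Tr_velo_to_cam @ x,
    with Tr_velo_to_cam and R0_rect padded to homogeneous 4x4 matrices.
    """
    n = points_xyz.shape[0]
    pts = np.hstack([points_xyz, np.ones((n, 1))]).T   # 4 x N homogeneous points
    cam = R0_rect @ Tr_velo_to_cam @ pts               # 4 x N in rectified camera frame
    in_front = cam[2, :] > 0.1                         # drop points behind the camera
    uvw = P2 @ cam[:, in_front]                        # 3 x M homogeneous pixel coords
    return (uvw[:2, :] / uvw[2, :]).T                  # perspective divide -> M x 2 (u, v)

def roi_from_cluster(cluster_xyz, calib, img_w, img_h, pad=10):
    """Bounding box (u0, v0, u1, v1) of one segmented LiDAR cluster in the image."""
    uv = project_velo_to_image(cluster_xyz, *calib)
    u0 = max(int(uv[:, 0].min()) - pad, 0)
    v0 = max(int(uv[:, 1].min()) - pad, 0)
    u1 = min(int(uv[:, 0].max()) + pad, img_w - 1)
    v1 = min(int(uv[:, 1].max()) + pad, img_h - 1)
    return u0, v0, u1, v1
```

The record does not specify how the paper clips or pads its ROIs, so treat the box construction above as one plausible choice rather than the published method.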
Publicly available datasets provide point cloud data and color images suitable for object recognition. In this paper, we use two such datasets, KITTI and PASCAL VOC. First, the KITTI dataset supplies the depth data used for LiDAR segmentation (LS) of objects in LiDAR point clouds; the segmented cloud points are then used to locate regions of interest (ROIs) in the corresponding images. Next, we train a YOLOv4 neural network on the PASCAL VOC dataset for object detection and, at evaluation time, feed the ROI images to YOLOv4. Combining these steps, the algorithm both segments and detects objects: it constructs the LiDAR point cloud and detects objects in the image in real time.
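For the detection stage, here is a hedged sketch of running YOLOv4 on an ROI crop using OpenCV's DNN module and the publicly released Darknet config/weights file names. The paper trains on PASCAL VOC, so its actual weights, input size, and confidence/NMS thresholds may differ from the values assumed here.

```python
import cv2

# Assumed file names: the public Darknet YOLOv4 release. The paper's own
# PASCAL VOC-trained weights are not available in this record.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def detect_in_roi(image, roi, conf_thr=0.5, nms_thr=0.4):
    """Run YOLOv4 on one ROI crop and map boxes back to full-image coordinates."""
    u0, v0, u1, v1 = roi
    crop = image[v0:v1, u0:u1]
    class_ids, confidences, boxes = model.detect(
        crop, confThreshold=conf_thr, nmsThreshold=nms_thr)
    # boxes are (x, y, w, h) relative to the crop; offset them back to the image
    return [(cid, conf, (x + u0, y + v0, w, h))
            for cid, conf, (x, y, w, h) in zip(class_ids, confidences, boxes)]
```

Restricting inference to LiDAR-derived ROIs, as the abstract describes, keeps the detector's input small, which is what makes the combined segmentation-plus-detection loop feasible in real time.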
Published in: Journal of Sensors, 2021, Vol. 2021 (1)
Authors: Fan, Yu-Cheng; Yelamandala, Chitra Meghala; Chen, Ting-Wei; Huang, Chun-Ju
Contributor: Butun, Ismail
Format: Article
Language: English
Publisher: Hindawi, New York
DOI: 10.1155/2021/5576262
ISSN: 1687-725X (EISSN: 1687-7268)
Subjects: Algorithms; Artificial intelligence; Automobiles; Autonomous cars; Autonomous vehicles; Cameras; Collision avoidance; Color imagery; Computer peripherals; Cultural heritage; Datasets; Deep learning; Environment models; Image segmentation; Lidar; Neural networks; Object recognition; Real time; Route planning; Sensors; Websites
Rights: Copyright © 2021 Yu-Cheng Fan et al.; open access under the Creative Commons Attribution License (CC BY 4.0)
Online access: Full text