A Camera and LiDAR Data Fusion Method for Railway Object Detection

Object detection on railway tracks, which is crucial for train operational safety, faces numerous challenges such as the multiple types of objects and the complexity of the train running environment. In this study, a multi-sensor framework is proposed to fuse camera and LiDAR data for the detection of objects...

Bibliographic Details
Published in: IEEE sensors journal 2021-06, Vol.21 (12), p.13442-13454
Main authors: Zhangyu, Wang; Guizhen, Yu; Xinkai, Wu; Haoran, Li; Da, Li
Format: Article
Language: English
Subjects:
Online Access: Order full text
description Object detection on railway tracks, which is crucial for train operational safety, faces numerous challenges such as the multiple types of objects and the complexity of the train running environment. In this study, a multi-sensor framework is proposed to fuse camera and LiDAR data for the detection of objects on the railway track, including small obstacles and forward trains. The framework involves a two-stage process: region of interest extraction and object detection. In the first stage, a multi-scale prediction network is designed to achieve pixel-level segmentation of the railway track and the forward train from the camera image. In the second stage, LiDAR data are used to estimate the distance to the train and to detect small obstacles in the railway track area extracted in the first stage. Experimental results show that the region of interest extraction method achieves desirable accuracy for railway track and train segmentation, and that the proposed fusion method outperforms methods based on the camera or LiDAR alone for the detection of small obstacles and forward trains. Moreover, the proposed framework has been successfully applied in practice on the Hong Kong Metro TSUEN WAN line and the Beijing Metro YANFANG line.
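The two-stage pipeline the abstract describes (image-based track segmentation produces a region of interest, then LiDAR processing is restricted to that region) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the projection matrix `P`, the helper names, and the toy mask and points are all assumptions introduced here.

```python
import numpy as np

def project_to_image(points_xyz, P):
    """Project Nx3 LiDAR points into pixel coordinates with a 3x4 camera matrix P."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4 homogeneous
    uvw = homo @ P.T                                               # Nx3
    uv = uvw[:, :2] / uvw[:, 2:3]                                  # perspective divide
    return uv, uvw[:, 2]                                           # pixels, depth

def points_in_track_roi(points_xyz, track_mask, P):
    """Keep points that project inside the image and land on the track mask (stage-2 filter)."""
    uv, depth = project_to_image(points_xyz, P)
    h, w = track_mask.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points_xyz), dtype=bool)
    keep[valid] = track_mask[v[valid], u[valid]]
    return points_xyz[keep]

# Toy calibration and a mask standing in for the stage-1 segmentation output.
P = np.array([[100.0,   0.0, 50.0, 0.0],
              [  0.0, 100.0, 50.0, 0.0],
              [  0.0,   0.0,  1.0, 0.0]])
track_mask = np.zeros((100, 100), dtype=bool)
track_mask[:, :60] = True  # pretend the left part of the image is the track region

points = np.array([[0.0, 0.0, 10.0],   # projects onto the track mask -> kept
                   [2.0, 0.0, 10.0],   # projects outside the track mask -> dropped
                   [0.0, 0.0, -5.0]])  # behind the sensor -> dropped
roi = points_in_track_roi(points, track_mask, P)
nearest = float(roi[:, 2].min())  # crude stand-in for the paper's distance estimate
```

Only the first point survives the filter, so `nearest` is its 10 m depth; the paper's obstacle detection and train-distance estimation would then run on these in-ROI points rather than the full cloud.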
doi 10.1109/JSEN.2021.3066714
identifier ISSN: 1530-437X
eissn 1558-1748
source IEEE Electronic Library (IEL)
subjects Barriers
Cameras
Data fusion
Data integration
Feature extraction
Image segmentation
Laser radar
LiDAR
Object detection
Object recognition
Rail transportation
railway
Railway tracks
Sensors
Subways
Trains
vision