Obstacle type recognition in visual images via dilated convolutional neural network for unmanned surface vehicles
Recognition of obstacle type based on visual sensors is important for navigation by unmanned surface vehicles (USV), including path planning, obstacle avoidance, and reactive control. Conventional detection techniques may fail to distinguish obstacles that are similar in visual appearance in a cluttered environment. This work proposes a novel obstacle type recognition approach that combines a dilated operator with the deep-level feature maps of ResNet50 for autonomous navigation. First, visual images are collected and annotated from various scenarios for USV test navigation. Second, a deep learning model based on a dilated convolutional neural network is configured and trained. Dilated convolution allows the whole network to learn deep features with an increased receptive field, further improving the performance of obstacle type recognition. Third, a series of evaluation parameters is used to evaluate the trained model, such as mean average precision (mAP), miss rate and detection speed. Finally, experiments are designed to verify the accuracy of the proposed approach on visual images from a cluttered environment. Experimental results demonstrate that the dilated convolutional neural network achieves better recognition performance than the other methods, with an mAP of 88%.
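The mechanism behind the claimed gain is that dilated convolution inserts gaps between kernel taps: a 3×3 kernel with dilation rate d covers an effective (2d+1)×(2d+1) window, so the receptive field grows without extra parameters or further downsampling. The sketch below illustrates this pattern on a torchvision ResNet50 backbone; it is a minimal sketch under assumed layer choices (channel widths, dilation rate, head design), not the authors' implementation.

```python
# A minimal sketch, assuming a torchvision ResNet50 backbone (not the authors'
# published code). Channel widths, the dilation rate and the head design are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class DilatedHead(nn.Module):
    """3x3 dilated convolution applied to the deepest ResNet50 feature map."""

    def __init__(self, in_channels: int = 2048, out_channels: int = 256, dilation: int = 2):
        super().__init__()
        # With dilation d, a 3x3 kernel covers an effective (2d+1)x(2d+1) window,
        # so d=2 sees a 5x5 region with the parameter count of a 3x3 kernel.
        # padding=dilation keeps the spatial size of the feature map unchanged.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.bn(self.conv(x)))

# Strip the classification layers (avgpool, fc) to expose the deep feature map.
backbone = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
head = DilatedHead()
out = head(backbone(torch.randn(1, 3, 224, 224)))
print(out.shape)  # torch.Size([1, 256, 7, 7])
```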
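On the evaluation side, mAP is the mean over obstacle classes of per-class average precision, computed from each class's precision-recall curve. The abstract does not say which AP interpolation the authors used, so the sketch below assumes the common 11-point scheme; the class names and precision-recall values are hypothetical.

```python
# A minimal sketch of mAP, assuming 11-point interpolated average precision;
# the abstract does not specify the authors' interpolation scheme.
from typing import Dict, Sequence, Tuple

def average_precision(recalls: Sequence[float], precisions: Sequence[float]) -> float:
    """11-point interpolated AP: mean over r in {0.0, 0.1, ..., 1.0} of the
    maximum precision attained at recall >= r."""
    ap = 0.0
    for i in range(11):
        r = i / 10
        at_least_r = [p for rec, p in zip(recalls, precisions) if rec >= r]
        ap += max(at_least_r, default=0.0) / 11
    return ap

def mean_average_precision(
    per_class: Dict[str, Tuple[Sequence[float], Sequence[float]]]
) -> float:
    """mAP: unweighted mean of per-class AP, given {class: (recalls, precisions)}."""
    aps = [average_precision(r, p) for r, p in per_class.values()]
    return sum(aps) / len(aps)

# Hypothetical precision-recall points for two obstacle classes.
print(mean_average_precision({
    "buoy": ([0.2, 0.5, 0.8], [1.0, 0.9, 0.7]),
    "boat": ([0.3, 0.6, 0.9], [0.95, 0.85, 0.6]),
}))
```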
Published in: | Journal of navigation, 2022-03, Vol. 75 (2), p. 437-454 |
---|---|
Main authors: | Shi, Binghua; Su, Yixin; Lian, Cheng; Xiong, Chang; Long, Yang; Gong, Chenglong |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Artificial neural networks; Autonomous navigation; Convolution; Deep learning; Detection; Efficiency; Methods; Navigation; Neural networks; Object recognition; Obstacle avoidance; Path planning; Performance enhancement; Sensors; Surface vehicles; Surveillance; Unmanned vehicles; Vehicles |
Online access: | Full text |
container_end_page | 454 |
---|---|
container_issue | 2 |
container_start_page | 437 |
container_title | Journal of navigation |
container_volume | 75 |
creator | Shi, Binghua; Su, Yixin; Lian, Cheng; Xiong, Chang; Long, Yang; Gong, Chenglong |
description | Recognition of obstacle type based on visual sensors is important for navigation by unmanned surface vehicles (USV), including path planning, obstacle avoidance, and reactive control. Conventional detection techniques may fail to distinguish obstacles that are similar in visual appearance in a cluttered environment. This work proposes a novel obstacle type recognition approach that combines a dilated operator with the deep-level feature maps of ResNet50 for autonomous navigation. First, visual images are collected and annotated from various scenarios for USV test navigation. Second, a deep learning model based on a dilated convolutional neural network is configured and trained. Dilated convolution allows the whole network to learn deep features with an increased receptive field, further improving the performance of obstacle type recognition. Third, a series of evaluation parameters is used to evaluate the trained model, such as mean average precision (mAP), miss rate and detection speed. Finally, experiments are designed to verify the accuracy of the proposed approach on visual images from a cluttered environment. Experimental results demonstrate that the dilated convolutional neural network achieves better recognition performance than the other methods, with an mAP of 88%. |
doi_str_mv | 10.1017/S0373463321000941 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0373-4633 |
ispartof | Journal of navigation, 2022-03, Vol.75 (2), p.437-454 |
issn | 0373-4633; 1469-7785 |
language | eng |
recordid | cdi_proquest_journals_2652014274 |
source | Cambridge University Press Journals Complete |
subjects | Accuracy; Artificial neural networks; Autonomous navigation; Convolution; Deep learning; Detection; Efficiency; Methods; Navigation; Neural networks; Object recognition; Obstacle avoidance; Path planning; Performance enhancement; Sensors; Surface vehicles; Surveillance; Unmanned vehicles; Vehicles |
title | Obstacle type recognition in visual images via dilated convolutional neural network for unmanned surface vehicles |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T21%3A43%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Obstacle%20type%20recognition%20in%20visual%20images%20via%20dilated%20convolutional%20neural%20network%20for%20unmanned%20surface%20vehicles&rft.jtitle=Journal%20of%20navigation&rft.au=Shi,%20Binghua&rft.date=2022-03-01&rft.volume=75&rft.issue=2&rft.spage=437&rft.epage=454&rft.pages=437-454&rft.issn=0373-4633&rft.eissn=1469-7785&rft_id=info:doi/10.1017/S0373463321000941&rft_dat=%3Cproquest_cross%3E2652014274%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2652014274&rft_id=info:pmid/&rft_cupid=10_1017_S0373463321000941&rfr_iscdi=true |