Multi-Scale Ship Detection From SAR and Optical Imagery Via A More Accurate YOLOv3
The use of deep learning detection methods for ship detection remains a challenge, owing to the small scale of the objects and interference from complex sea surfaces. In addition, existing ship detection methods rarely verify the robustness of their algorithms on multisensor images. Thus, we propose a new i...
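The abstract (given in full in the description field below) lists three changes to YOLOv3. The first is anchor-box selection with the k-means++ algorithm followed by linear scaling across detection scales. The sketch below is a minimal, hypothetical illustration of that step; the function names, the scaling factors, and the use of scikit-learn's KMeans are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the anchor-selection step described in the abstract:
# cluster ground-truth box sizes with k-means++, then spread the resulting
# anchors across detection scales by linear scaling.
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchor_boxes(wh, n_clusters=12, seed=0):
    """Cluster (width, height) pairs of labelled ships with k-means++."""
    # scikit-learn's KMeans uses the k-means++ initialisation by default.
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=seed)
    km.fit(wh)
    anchors = km.cluster_centers_
    # Sort by area so anchors can be assigned to scales from small to large.
    return anchors[np.argsort(anchors.prod(axis=1))]

def linear_scale_anchors(anchors, low=0.8, high=1.2):
    """Spread the clustered anchors linearly so adjacent detection scales get
    more clearly separated box priors (the exact scaling scheme is assumed)."""
    factors = np.linspace(low, high, num=len(anchors))
    return anchors * factors[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake (width, height) statistics standing in for ship bounding boxes, in pixels.
    wh = rng.uniform(8, 120, size=(500, 2))
    anchors = linear_scale_anchors(cluster_anchor_boxes(wh))
    # 12 anchors over 3 Gaussian-YOLO scales -> 4 anchors per scale.
    print(anchors.reshape(3, 4, 2))
```

Splitting 12 anchors over the three detection scales gives four per Gaussian-YOLO layer, mirroring the change from the default three anchors per scale described in the abstract.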
Saved in:
Published in: | IEEE journal of selected topics in applied earth observations and remote sensing, 2021, Vol.14, p.6083-6101 |
---|---|
Main Authors: | Hong, Zhonghua; Yang, Ting; Tong, Xiaohua; Zhang, Yun; Jiang, Shenlu; Zhou, Ruyan; Han, Yanling; Wang, Jing; Yang, Shuhu; Liu, Sichong |
Format: | Article |
Language: | eng |
Subjects: | ship detection; deep learning-based object detection; synthetic aperture radar (SAR) and optical imagery; "you only look once" version 3 (YOLOv3) |
Online Access: | Full text |
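The abstract's second modification attaches a Gaussian uncertainty to each predicted bounding-box coordinate. The following is a minimal PyTorch sketch of that idea, following the general Gaussian-YOLOv3 formulation (a negative log-likelihood over per-coordinate means and variances); the sigmoid-bounded sigma and the derived localisation confidence are assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the "Gaussian parameter" idea: each box coordinate is
# predicted as a mean plus a variance, and training minimises the Gaussian
# negative log-likelihood so the network also reports localisation uncertainty.
import torch

def gaussian_box_nll(mu, sigma, target, eps=1e-9):
    """Negative log-likelihood of matched box offsets under N(mu, sigma^2).

    mu, sigma, target have shape (N, 4) for (tx, ty, tw, th); sigma is assumed
    to have already been squashed into (0, 1), e.g. by a sigmoid.
    """
    var = sigma ** 2 + eps
    nll = 0.5 * torch.log(2 * torch.pi * var) + (target - mu) ** 2 / (2 * var)
    return nll.sum(dim=1).mean()

# Example: 8 predicted boxes with per-coordinate uncertainties vs. matched targets.
mu = torch.rand(8, 4)
sigma = torch.sigmoid(torch.randn(8, 4))
target = torch.rand(8, 4)
loss = gaussian_box_nll(mu, sigma, target)
# A localisation confidence can be read off as 1 minus the mean uncertainty.
loc_conf = 1.0 - sigma.mean(dim=1)
print(float(loss), loc_conf.shape)
```

In a full detector this NLL term would stand in for the usual box-regression loss, and the predicted sigma could be folded into the detection confidence used at inference time.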
container_end_page | 6101 |
---|---|
container_issue | |
container_start_page | 6083 |
container_title | IEEE journal of selected topics in applied earth observations and remote sensing |
container_volume | 14 |
creator | Hong, Zhonghua; Yang, Ting; Tong, Xiaohua; Zhang, Yun; Jiang, Shenlu; Zhou, Ruyan; Han, Yanling; Wang, Jing; Yang, Shuhu; Liu, Sichong |
description | The use of deep learning detection methods for ship detection remains a challenge, owing to the small scale of the objects and interference from complex sea surfaces. In addition, existing ship detection methods rarely verify the robustness of their algorithms on multisensor images. Thus, we propose a new improvement on the "you only look once" version 3 (YOLOv3) framework for ship detection in marine surveillance, based on synthetic aperture radar (SAR) and optical imagery. First, improved choices are obtained for the anchor boxes by using linear scaling based on the k-means++ algorithm. This addresses the difficulty in reflecting the advantages of YOLOv3's multiscale detection, as the anchor boxes of a single detection target type differ only slightly between detection scales. Second, we add uncertainty estimators for the positioning of the bounding boxes by introducing a Gaussian parameter for ship detection into the YOLOv3 framework. Finally, four anchor boxes are allocated to each detection scale in the Gaussian-YOLO layer instead of three as in the default YOLOv3 settings, as there are wide disparities in an object's size and direction in remote sensing images with different resolutions. Applying the proposed strategy to "YOLOv3-spp" and "YOLOv3-tiny" enhances their results by 2%-3%. Compared with other models, the improved-YOLOv3 has the highest average precision on both the optical (93.56%) and SAR (95.52%) datasets. The improved-YOLOv3 is robust, even in the context of a mixed dataset of SAR and optical images comprising images from different satellites and with different scales. |
doi_str_mv | 10.1109/JSTARS.2021.3087555 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1939-1404 |
ispartof | IEEE journal of selected topics in applied earth observations and remote sensing, 2021, Vol.14, p.6083-6101 |
issn | 1939-1404 2151-1535 |
language | eng |
recordid | cdi_proquest_journals_2547644775 |
source | DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |
subjects | Adaptive optics; Algorithms; Boxes; Datasets; Deep learning-based object detection; Detection; Imagery; Machine learning; Marine vehicles; Methods; Object detection; Object recognition; Optical imaging; Optical reflection; Optical sensors; Remote sensing; SAR (radar); Satellite imagery; Scaling; ship detection; Synthetic aperture radar; synthetic aperture radar (SAR) and optical imagery; Target detection; “you only look once” version 3 (YOLOv3) |
title | Multi-Scale Ship Detection From SAR and Optical Imagery Via A More Accurate YOLOv3 |
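The record's abstract reports average precision as the headline metric (93.56% on the optical dataset, 95.52% on the SAR dataset). The sketch below shows one common way such an AP figure is computed from score-ranked detections; the all-point interpolation and the matching inputs are assumptions about the evaluation protocol, not taken from the paper.

```python
# Hedged sketch of a VOC-style average-precision computation: detections are
# ranked by confidence, precision-recall points are accumulated, and the area
# under the interpolated precision-recall curve is integrated.
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """All-point interpolated AP from ranked detections."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(n_ground_truth, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # Pad the curve, make precision monotonically non-increasing, and integrate.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1], n_ground_truth=4))
```

Running the example prints 0.6875, i.e. the AP of four ranked detections, three of them correct, against four ground-truth ships.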