YOLOv8-STE: Enhancing Object Detection Performance Under Adverse Weather Conditions with Deep Learning
Object detection powered by deep learning is extensively utilized across diverse sectors, yielding substantial outcomes. However, adverse weather conditions such as rain, snow, and haze interfere with images, degrading their quality and making it extremely challenging for existing methods to detect objects in images captured in such environments.
Published in: | Electronics (Basel) 2024-12, Vol.13 (24), p.5049 |
---|---|
Main Authors: | Jing, Zhiyong; Li, Sen; Zhang, Qiuwen |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | 24 |
container_start_page | 5049 |
container_title | Electronics (Basel) |
container_volume | 13 |
creator | Jing, Zhiyong; Li, Sen; Zhang, Qiuwen |
description | Object detection powered by deep learning is extensively utilized across diverse sectors, yielding substantial outcomes. However, adverse weather conditions such as rain, snow, and haze interfere with images, degrading their quality and making it extremely challenging for existing methods to detect objects in images captured in such environments. To address this problem, we propose a detection approach based on the YOLOv8 model, named YOLOv8-STE. Specifically, we introduce a new detection module, ST, on top of YOLOv8, which integrates global information step by step through window movement while capturing local details. This is particularly important under adverse weather conditions and effectively enhances detection accuracy. Additionally, an EMA mechanism is incorporated into the neck network; it reduces computational cost through streamlined operations and enriches the original features, making them more hierarchical and thereby improving detection stability and generalization. Finally, soft-NMS replaces the traditional non-maximum suppression method. Experimental results indicate that the proposed YOLOv8-STE performs excellently under adverse weather conditions: compared to the baseline YOLOv8 model, it achieves superior results on the RTTS dataset, providing a more efficient method for object detection in adverse weather. (A generic soft-NMS sketch appears after the record fields below.) |
doi_str_mv | 10.3390/electronics13245049 |
format | Article |
publisher | Basel: MDPI AG |
rights | COPYRIGHT 2024 MDPI AG; 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
eissn | 2079-9292 |
fulltext | fulltext |
identifier | ISSN: 2079-9292 |
ispartof | Electronics (Basel), 2024-12, Vol.13 (24), p.5049 |
issn | 2079-9292 |
language | eng |
recordid | cdi_proquest_journals_3149598319 |
source | Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; MDPI - Multidisciplinary Digital Publishing Institute |
subjects | Accuracy; Algorithms; Automation; Classification; Deep learning; Efficiency; Image detection; Image quality; Localization; Medical imaging equipment; Motion perception; Neural networks; Telematics; Weather |
title | YOLOv8-STE: Enhancing Object Detection Performance Under Adverse Weather Conditions with Deep Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T03%3A01%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=YOLOv8-STE:%20Enhancing%20Object%20Detection%20Performance%20Under%20Adverse%20Weather%20Conditions%20with%20Deep%20Learning&rft.jtitle=Electronics%20(Basel)&rft.au=Jing,%20Zhiyong&rft.date=2024-12-01&rft.volume=13&rft.issue=24&rft.spage=5049&rft.pages=5049-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics13245049&rft_dat=%3Cgale_proqu%3EA821763416%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3149598319&rft_id=info:pmid/&rft_galeid=A821763416&rfr_iscdi=true |
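Note on the method: the abstract states that YOLOv8-STE replaces standard non-maximum suppression with soft-NMS. For orientation only, the following is a minimal, generic sketch of Gaussian soft-NMS; it is not the authors' implementation, and the function names, `sigma`, and `score_threshold` values are illustrative assumptions.

```python
# Generic Gaussian soft-NMS sketch (not the YOLOv8-STE code).
# Instead of discarding boxes that overlap the top-scoring box, soft-NMS
# decays their scores by exp(-IoU^2 / sigma) and keeps those above a threshold.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_threshold=0.001):
    """Return indices of kept boxes, highest-scoring first (Gaussian decay)."""
    scores = scores.astype(float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        # Select the remaining box with the highest (possibly decayed) score.
        top = idxs[np.argmax(scores[idxs])]
        keep.append(int(top))
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        # Decay the scores of the remaining boxes by their overlap with `top`.
        overlaps = iou(boxes[top], boxes[idxs])
        scores[idxs] *= np.exp(-(overlaps ** 2) / sigma)
        # Drop boxes whose decayed score falls below the threshold.
        idxs = idxs[scores[idxs] > score_threshold]
    return keep

# Example: two heavily overlapping boxes and one separate box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))  # the overlapping box is down-weighted, not removed
```

With the Gaussian decay, a larger `sigma` penalizes overlapping boxes more gently; hard NMS corresponds to the limit where any box above an IoU threshold is suppressed outright, which is the behavior the abstract says soft-NMS replaces.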