A forest fire detection method based on improved YOLOv5

Bibliographic details
Published in: Signal, Image and Video Processing, 2025-01, Vol. 19 (2), Article 136
Authors: Sun, Zukai; Xu, Ruzhi; Zheng, Xiangwei; Zhang, Lifeng; Zhang, Yuang
Format: Article
Language: English
Online access: Full text
Abstract: With the continuous intensification of global climate change, forest fires have become a significant threat to natural ecosystems and human society. Automatic fire detection systems play a crucial role in the early detection of forest fires. Current YOLOv5-based fire detection methods face several significant challenges: low accuracy and high miss rates in complex backgrounds, inefficiency in real-time applications, and difficulty in detecting small targets, particularly in the early stages of a fire. To address these issues, we propose a forest fire detection method based on improved YOLOv5, aimed at efficient real-time monitoring in resource-constrained environments. First, we add the Convolutional Block Attention Module (CBAM) to strengthen channel and spatial attention, enhancing the detection of the small fire features essential for early detection. Next, we integrate a small-target detection layer and the Ghost module into YOLOv5: the small-target layer boosts sensitivity to small fire areas, while the Ghost module reduces computational load and parameter count, improving feature extraction without sacrificing performance. Finally, we use the SIoU loss function to accelerate model convergence, enhancing overall detection efficiency and precision. Experimental results show that the proposed method achieves an mAP of 88.3% on the Yang et al. dataset, a 0.9% improvement over other YOLOv5-based methods on the same dataset, with a 2.8% reduction in model parameter size. On our own forest fire detection dataset, the proposed method achieves an mAP of 79.1%, a 3.7% improvement over the YOLOv5s model, with a 2.3% reduction in model parameter size.
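The record does not include the authors' implementation, but the two building blocks the abstract names, CBAM and the Ghost module, have well-known standard formulations. The following is a minimal PyTorch sketch of those conventional definitions; the class names, the reduction ratio of 16, and the way the blocks are chained in the usage example at the bottom are illustrative assumptions, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """CBAM channel attention: global avg/max pooling -> shared MLP -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        gate = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                             self.mlp(F.adaptive_max_pool2d(x, 1)))
        return x * gate


class SpatialAttention(nn.Module):
    """CBAM spatial attention: channel-wise mean/max maps -> 7x7 conv -> sigmoid gate."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate


class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied to one feature map."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial(self.channel(x))


class GhostConv(nn.Module):
    """Ghost module: a small primary convolution produces intrinsic features, then a
    cheap depthwise convolution generates the remaining ("ghost") features."""
    def __init__(self, c_in: int, c_out: int, ratio: int = 2, dw_size: int = 3):
        super().__init__()
        init_ch = math.ceil(c_out / ratio)
        self.c_out = c_out
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, init_ch * (ratio - 1), dw_size,
                      padding=dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(init_ch * (ratio - 1)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)[:, :self.c_out]


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)      # hypothetical YOLOv5 neck feature map
    refined = CBAM(256)(feat)               # attention-refined features, same shape
    reduced = GhostConv(256, 128)(refined)  # cheaper 256 -> 128 channel projection
    print(refined.shape, reduced.shape)
```

In this sketch the Ghost module halves the ordinary convolution cost by computing only `c_out / ratio` channels with a full convolution and deriving the rest with a depthwise one, which matches the parameter reduction the abstract reports in spirit, though the actual placement of these blocks inside the YOLOv5 backbone and neck is specific to the paper.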
ISSN: 1863-1703; 1863-1711
DOI: 10.1007/s11760-024-03680-6