Advanced Multi-Label Fire Scene Image Classification via BiFormer, Domain-Adversarial Network and GCN

Bibliographic Details
Published in: Fire (Basel, Switzerland), 2024-09, Vol. 7 (9), p. 322
Main Authors: Bai, Yu; Wang, Dan; Li, Qingliang; Liu, Taihui; Ji, Yuheng
Format: Article
Language: English
Online Access: Full Text
Abstract: Detecting wildfires presents significant challenges due to the presence of various potential targets in fire imagery, such as smoke, vehicles, and people. To address these challenges, we propose a novel multi-label classification model based on BiFormer's feature extraction method, which constructs sparse region-indexing relations and performs feature extraction only in key regions, thereby facilitating more effective capture of flame characteristics. Additionally, we introduce a feature screening method based on a domain-adversarial neural network (DANN) to minimize misclassification by accurately determining feature domains. Furthermore, a feature discrimination method utilizing a Graph Convolutional Network (GCN) is proposed, enabling the model to capture label correlations more effectively and improve performance by constructing a label correlation matrix. This model enhances cross-domain generalization capability and improves recognition performance in fire scenarios. In the experimental phase, we developed a comprehensive dataset by integrating multiple fire-related public datasets and conducted detailed comparison and ablation experiments. Results from the tenfold cross-validation demonstrate that the proposed model significantly improves recognition of multi-labeled images in fire scenarios. Compared with the baseline model, mAP increased by 4.426%, CP by 4.14%, and CF1 by 7.04%.
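
The abstract combines three components: a sparse-attention backbone (BiFormer), a domain-adversarial branch (DANN), and a GCN head over a label-correlation matrix. The following PyTorch sketch illustrates how such pieces can be wired together; it is not the authors' implementation. The convolutional backbone is a stand-in for BiFormer, the identity adjacency matrix is a placeholder for the learned label-correlation matrix, and all module names and layer sizes are illustrative assumptions.

# Minimal sketch (assumptions noted above): backbone features feed both a
# gradient-reversal domain classifier (DANN-style) and a GCN head whose
# output label embeddings act as per-label classifiers.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GCNLayer(nn.Module):
    """Single graph convolution: H' = ReLU(A @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        return torch.relu(adj @ self.weight(h))


class MultiLabelFireModel(nn.Module):
    def __init__(self, num_labels=5, feat_dim=512, label_dim=300):
        super().__init__()
        # Placeholder backbone; the paper uses BiFormer's sparse attention here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # DANN branch: domain classifier fed through gradient reversal.
        self.domain_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2),
        )
        # GCN head: propagates label embeddings over the label-correlation matrix.
        self.label_emb = nn.Parameter(torch.randn(num_labels, label_dim))
        self.gcn1 = GCNLayer(label_dim, feat_dim)
        self.gcn2 = GCNLayer(feat_dim, feat_dim)

    def forward(self, images, adj, lambd=1.0):
        feats = self.backbone(images)                                 # (B, feat_dim)
        domain_logits = self.domain_head(GradReverse.apply(feats, lambd))
        label_cls = self.gcn2(self.gcn1(self.label_emb, adj), adj)    # (L, feat_dim)
        label_logits = feats @ label_cls.t()                          # (B, L)
        return label_logits, domain_logits


if __name__ == "__main__":
    num_labels = 5
    adj = torch.eye(num_labels)   # stand-in for a co-occurrence-based correlation matrix
    model = MultiLabelFireModel(num_labels=num_labels)
    x = torch.randn(2, 3, 224, 224)
    label_logits, domain_logits = model(x, adj)
    print(label_logits.shape, domain_logits.shape)  # torch.Size([2, 5]) torch.Size([2, 2])

In a training loop of this kind, the label logits would be optimized with a multi-label loss (e.g., binary cross-entropy) while the domain logits, trained through the gradient-reversal layer, push the backbone toward domain-invariant features.
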
ISSN: 2571-6255
DOI: 10.3390/fire7090322