Fire Detection in Urban Areas Using Multimodal Data and Federated Learning

Bibliographic Details
Published in: Fire (Basel, Switzerland), 2024-04, Vol. 7 (4), p. 104
Authors: Sharma, Ashutosh; Kumar, Rajeev; Kansal, Isha; Popli, Renu; Khullar, Vikas; Verma, Jyoti; Kumar, Sunil
Format: Article
Language: English
Online access: Full text
Description
Abstract: Chemical sensing plays an essential role in indoor fire detection because it can detect chemical volatiles before smoke particles appear, providing a faster and more reliable method for early fire detection. A thermal imaging camera and seven distinct fire-detecting sensors were used simultaneously to acquire the multimodal fire data that is the subject of this paper. Low-cost sensors typically have lower sensitivity and reliability, which prevents them from detecting fire at greater distances. To overcome the limitation of relying solely on sensors to identify fire, the multimodal dataset also includes data from a thermal camera that can detect temperature changes. The proposed pipeline uses the thermal-camera image data to train convolutional neural networks (CNNs) and several of their variants. The sensor data (from the fire sensors) are trained with bidirectional long short-term memory and dense layers (BiLSTM-Dense) and long short-term memory and dense layers (LSTM-Dense), and merging both datasets demonstrates the performance of the multimodal approach. Researchers and system developers can use the dataset to create and refine state-of-the-art artificial intelligence models and systems. Initial evaluation of the image dataset showed DenseNet201 to be the best approach, with the highest validation parameters (0.99, 0.99, 0.99, and 0.08 for Accuracy, Precision, Recall, and Loss, respectively). On the sensor dataset, the BiLSTM-Dense approach achieved the highest parameters (0.95, 0.95, 0.95, 0.14). In the multimodal approach, combining image and sensor data with a multimodal algorithm (DenseNet201 for image data and BiLSTM-Dense for sensor data) achieved (1.0, 1.0, 1.0, 0.06). This work demonstrates that, in comparison to the conventional deep learning approach, the federated learning (FL) approach performs privacy-protected fire classification without significantly sacrificing accuracy or other validation parameters.
ISSN: 2571-6255
DOI: 10.3390/fire7040104
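
The abstract describes a two-branch multimodal pipeline (DenseNet201 on thermal images, BiLSTM-Dense on the seven fire sensors, fused for joint classification, trained under federated learning). The sketch below is a minimal, hypothetical Keras reconstruction of that idea, not the authors' published implementation: the input shapes, layer sizes, number of clients, and the plain FedAvg weight averaging are all assumptions made for illustration.

```python
# Minimal sketch of a multimodal fire classifier with federated averaging.
# Assumptions (not from the paper): image size, sequence length, layer widths,
# number of clients, and the use of plain FedAvg.
import numpy as np
from tensorflow.keras import layers, models, applications

IMG_SHAPE = (224, 224, 3)      # assumed thermal-image size, replicated to 3 channels
SEQ_LEN, N_SENSORS = 20, 7     # assumed window length; 7 sensors as in the abstract
NUM_CLASSES = 2                # fire / no fire

def build_multimodal_model():
    """DenseNet201 branch for thermal images + BiLSTM-Dense branch for sensor sequences."""
    # Image branch: DenseNet201 feature extractor (weights=None keeps the sketch offline).
    base = applications.DenseNet201(include_top=False, weights=None, input_shape=IMG_SHAPE)
    img_feat = layers.GlobalAveragePooling2D()(base.output)

    # Sensor branch: bidirectional LSTM followed by a dense layer (BiLSTM-Dense).
    sen_in = layers.Input(shape=(SEQ_LEN, N_SENSORS), name="sensors")
    sen_feat = layers.Bidirectional(layers.LSTM(64))(sen_in)
    sen_feat = layers.Dense(32, activation="relu")(sen_feat)

    # Late fusion of the two modalities, then classification.
    fused = layers.Concatenate()([img_feat, sen_feat])
    fused = layers.Dense(64, activation="relu")(fused)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

    model = models.Model([base.input, sen_in], out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def federated_average(client_weight_sets):
    """Plain FedAvg: element-wise mean of corresponding weight tensors across clients."""
    return [np.mean(np.stack(w), axis=0) for w in zip(*client_weight_sets)]

if __name__ == "__main__":
    global_model = build_multimodal_model()
    clients = [build_multimodal_model() for _ in range(3)]  # assumed number of clients

    for _ in range(2):  # federated rounds (illustrative)
        for c in clients:
            c.set_weights(global_model.get_weights())
            # c.fit([local_images, local_sensors], local_labels, epochs=1)  # local data stays on the client
        global_model.set_weights(
            federated_average([c.get_weights() for c in clients]))
```

In this setup only model weights leave each client, which is the privacy argument the abstract makes for federated learning compared with centralized training on pooled data.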