Deep learning assisted portable IR active imaging sensor spots and identifies live humans through fire
Published in: Optics and Lasers in Engineering, 2020-01, Vol. 124, p. 105818, Article 105818
Main authors: , , , ,
Format: Article
Language: English
Online access: Full text
Summary:
• Portable active infrared imaging sensor can spot static or moving targets through smoke and flames.
• A deep learning architecture is used to automatically identify the presence of people through fire.
• Field portability, real-time imaging and unassisted image analysis are demonstrated.
• The device could be employed by first responders on the fire scene.
• The device could serve as a new video surveillance system providing enhanced vision through obscurants.
Achieving clear imaging through fire is a highly pursued goal, and various active field-portable devices have recently been proposed to improve on the capabilities of existing thermographic cameras. Here we combine an infrared active imaging sensor with artificial intelligence to obtain automatic detection of people hidden behind flames. We show the successful use of a pre-trained Convolutional Neural Network in recognizing a static or moving person through fire when imaged by the proposed system. Remarkably, the network can detect the person even when the imaging system cannot fully reject the flame disturbance, which improves the overall robustness of the approach. These results pave the way to automatic surveillance systems that generate alerts when a fire spreads and people are detected inside rooms invaded by flames, without relying on subjective human interpretation of the videos.
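The abstract does not disclose which pre-trained CNN the authors used, so the following is only a minimal sketch of the unassisted detection-and-alert step it describes. It assumes an off-the-shelf detector (torchvision's Faster R-CNN pre-trained on COCO) standing in for the paper's unspecified network; the confidence threshold and the placeholder frame are hypothetical.

```python
# Minimal sketch of the unassisted person-detection step described above.
# Assumptions (not from the paper): torchvision's Faster R-CNN pre-trained
# on COCO stands in for the authors' unspecified pre-trained CNN, and the
# 0.7 alert threshold is hypothetical.

import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

PERSON_CLASS_ID = 1          # "person" label index in COCO for this model
CONFIDENCE_THRESHOLD = 0.7   # hypothetical alert threshold

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def person_detected(frame: np.ndarray) -> bool:
    """Return True if a person is found in one H x W x 3 uint8 frame."""
    with torch.no_grad():
        predictions = model([to_tensor(frame)])[0]
    return any(
        label.item() == PERSON_CLASS_ID and score.item() >= CONFIDENCE_THRESHOLD
        for label, score in zip(predictions["labels"], predictions["scores"])
    )

# Stand-in for a frame delivered by the IR active imaging sensor.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
if person_detected(frame):
    print("ALERT: person detected through flames")
```

In the surveillance scenario the abstract sketches, a check like this would run on each frame coming off the sensor and raise an alert on the first confident detection, with no human in the loop.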
ISSN: 0143-8166, 1873-0302
DOI: 10.1016/j.optlaseng.2019.105818