Fireground location understanding by semantic linking of visual objects and building information models

This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras...

Detailed description

Bibliographic details
Published in: Fire safety journal, 2017-07, Vol. 91, p. 1026-1034
Main authors: Vandecasteele, Florian; Merci, Bart; Verstockt, Steven
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 1034
container_issue
container_start_page 1026
container_title Fire safety journal
container_volume 91
creator Vandecasteele, Florian
Merci, Bart
Verstockt, Steven
description This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety and fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning based visual object recognition and classification networks. Based on these matches, estimates of camera, object and event positions in the BIM model can be generated, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibility of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup and its multi-modal strategy, which make it possible to automatically create situational awareness, to improve localization and to facilitate overall fire understanding.
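The position-estimation idea in the description above — matching object labels recognized in a camera frame against per-room object inventories derived from a BIM — could be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the room inventories, detection labels and Jaccard scoring are all assumptions made for the example.

```python
# Sketch: estimate which BIM room a camera is viewing by comparing the set
# of object labels detected in a frame with each room's object inventory.
# All data below is illustrative placeholder content, not from the paper.

def locate_camera(detections, bim_rooms):
    """Score each BIM room by Jaccard similarity between detected object
    labels and the room's inventory; return the best-matching room."""
    scores = {}
    for room, inventory in bim_rooms.items():
        shared = set(detections) & set(inventory)
        union = set(detections) | set(inventory)
        scores[room] = len(shared) / len(union) if union else 0.0
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical per-room object inventories extracted from a BIM.
bim_rooms = {
    "kitchen": ["stove", "sink", "refrigerator", "table"],
    "office": ["desk", "chair", "monitor", "printer"],
    "corridor": ["door", "extinguisher", "sign"],
}

# Hypothetical labels returned by an object-recognition network.
detections = ["stove", "table", "door"]

room, scores = locate_camera(detections, bim_rooms)
print(room)  # "kitchen" — it shares the most objects with the detections
```

A real system, as the abstract notes, would link richer semantic metadata (thermal signatures, event types, spatial relations) rather than bare label sets, but the matching principle is the same.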
doi_str_mv 10.1016/j.firesaf.2017.03.083
format Article
publisher Lausanne: Elsevier Ltd
fulltext fulltext
identifier ISSN: 0379-7112
ispartof Fire safety journal, 2017-07, Vol.91, p.1026-1034
issn 0379-7112
1873-7226
language eng
recordid cdi_proquest_journals_1950623575
source Access via ScienceDirect (Elsevier)
subjects Building information modeling
Building information models
Building management systems
Buildings
Cameras
Computer vision
Data management
Fire analysis
Fire protection
Fires
Localization
Location estimation
Management tools
Multi-view sensing
Object recognition
Pattern recognition
Position (location)
Safety management
Semantic linking
Semantics
Situational awareness
Visual discrimination learning
Visual object recognition
Visual perception
Workflow
title Fireground location understanding by semantic linking of visual objects and building information models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T21%3A09%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Fireground%20location%20understanding%20by%20semantic%20linking%20of%20visual%20objects%20and%20building%20information%20models&rft.jtitle=Fire%20safety%20journal&rft.au=Vandecasteele,%20Florian&rft.date=2017-07&rft.volume=91&rft.spage=1026&rft.epage=1034&rft.pages=1026-1034&rft.issn=0379-7112&rft.eissn=1873-7226&rft_id=info:doi/10.1016/j.firesaf.2017.03.083&rft_dat=%3Cproquest_cross%3E1950623575%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1950623575&rft_id=info:pmid/&rft_els_id=S0379711217301364&rfr_iscdi=true