Computer Vision-Enabled Roof Subassembly Damage Detection from Hurricanes Using Aerial Reconnaissance Imagery
Published in: Natural Hazards Review, 2025-02, Vol. 26(1)
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Coastal communities are increasingly vulnerable to devastating losses caused by extreme climatological events such as hurricanes. As risk escalates, an urgent need arises to accelerate learning from these disasters. Investments in postdisaster data collection have yielded comprehensive imagery databases that, when coupled with breakthroughs in computer vision algorithms, present new opportunities for automated damage assessments. Research in this area has been primarily directed toward assigning building-level damage ratings from at-a-distance imagery or localizing highly granular damages from up-close imagery. This study introduces a workflow to quantify granular subassembly damages using at-a-distance imagery, focusing on the residential roof subassemblies most vulnerable to hurricane winds: the roof cover, substrate, and framing. The workflow optimizes computational resources by sequencing aerial images through two classification models before feeding them into a semantic segmentation model to quantify damage on a HAZUS-MH compatible scale. To test the performance of this workflow and the influence of image quality, we deployed the models on a sample of 373 single-family homes in Calcasieu Parish, Louisiana, a community heavily impacted by Hurricane Laura in August 2020. We explored differences in ground truth damage data from homeowners and engineers, opting for the latter because it is less likely to factor in interior and content losses not visible from imagery. The results demonstrate the potential to advance computer vision techniques for the quantification of granular damages from reconnaissance imagery. Although the models tend to classify subassembly damage states in high-resolution images with moderate accuracy, accuracy decreases with the damage level, likely because more severe damage states manifest as less structured images. This suggests a need to better refine the features distinguishing more severe damage and to expand training sets with a wider variety of severe damage images encompassing a broader range of disorganization. Additionally, limiting the number of classes in segmentation tasks can lead to more accurate results.
ISSN: 1527-6988, 1527-6996
DOI: 10.1061/NHREFO.NHENG-2278
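
The abstract describes a staged workflow in which aerial images pass through two lightweight classification models before an expensive semantic segmentation step quantifies damage on a HAZUS-MH compatible scale. The sketch below illustrates one way such a triage pipeline could be organized; the model callables, label indices, and damage-state thresholds are hypothetical placeholders assumed for illustration, not the authors' released implementation or the published HAZUS-MH criteria.

```python
"""Minimal sketch of the staged triage pipeline summarized in the abstract:
two inexpensive classification passes gate an expensive semantic segmentation
step. All model callables, label indices, and thresholds are hypothetical
placeholders, not the authors' implementation."""

from typing import Callable, Dict
import numpy as np

# Hypothetical label indices for the segmentation output (0 = background).
ROOF_COVER_DAMAGE = 1
ROOF_SUBSTRATE_DAMAGE = 2
ROOF_FRAMING_DAMAGE = 3


def assess_roof_damage(
    image: np.ndarray,
    is_usable: Callable[[np.ndarray], bool],      # classifier 1: image quality/relevance gate
    shows_damage: Callable[[np.ndarray], bool],   # classifier 2: visible-damage gate
    segment: Callable[[np.ndarray], np.ndarray],  # segmentation model -> per-pixel labels
) -> Dict[str, object]:
    """Run the cheap classifiers first so segmentation only sees images
    that are usable and show visible roof damage."""
    if not is_usable(image):
        return {"status": "rejected", "reason": "unusable image"}
    if not shows_damage(image):
        return {"status": "assessed", "damage_state": "none"}

    mask = segment(image)  # integer label per pixel, same spatial shape as the image

    # Fraction of image pixels assigned to each damaged subassembly class.
    fractions = {
        "roof_cover": float((mask == ROOF_COVER_DAMAGE).mean()),
        "roof_substrate": float((mask == ROOF_SUBSTRATE_DAMAGE).mean()),
        "roof_framing": float((mask == ROOF_FRAMING_DAMAGE).mean()),
    }
    damaged_fraction = float((mask > 0).mean())

    # Illustrative thresholds binning the damaged fraction into a coarse,
    # HAZUS-MH-style damage state; the published scale defines its own criteria.
    if damaged_fraction <= 0.02:
        state = "minimal"
    elif damaged_fraction <= 0.15:
        state = "minor"
    elif damaged_fraction <= 0.50:
        state = "moderate"
    else:
        state = "severe"

    return {
        "status": "assessed",
        "damage_state": state,
        "damaged_fraction": damaged_fraction,
        "subassembly_fractions": fractions,
    }
```

Gating with the two classifiers first mirrors the resource-optimization idea in the abstract: the costliest step, semantic segmentation, only runs on images that are both usable and show visible damage.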