Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data
Format: Article
Language: English
Abstract: We present a simple and efficient method for leveraging emerging text-to-image generative models to create large-scale synthetic supervision for damage assessment from aerial images. While significant recent advances have improved damage assessment from aerial or satellite imagery, existing techniques still suffer from poor robustness in domains where manually labeled data is unavailable, directly impacting post-disaster humanitarian assistance in such under-resourced geographies. Our contribution toward improving domain robustness in this scenario is twofold. First, we leverage the text-guided, mask-based image-editing capabilities of generative models to build an efficient and easily scalable pipeline that generates thousands of post-disaster images for low-resource domains. Second, we propose a simple two-stage training approach that trains robust models using manual supervision from different source domains together with the generated synthetic target-domain data. We validate our framework in a cross-geography domain-transfer setting between xBD and SKAI images, in both single-source and multi-source configurations, achieving significant improvements over a source-only baseline in each case.
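The two-stage approach described in the abstract can be sketched as a data-scheduling problem: first train on the union of manually labeled source-domain data and generated synthetic target-domain data, then adapt using the synthetic target-domain images alone. The sketch below is a minimal illustration under that assumption; all names (`Example`, `stage_one`, `stage_two`) and the exact mixing scheme are hypothetical and not taken from the paper.

```python
# Illustrative sketch of a two-stage data schedule for domain-robust training.
# The stage contents and mixing scheme are assumptions for illustration only;
# the paper's actual training recipe may differ.
from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    image_id: str
    domain: str   # "source" (manually labeled) or "target_synth" (generated)
    label: int    # e.g. 0 = undamaged, 1 = damaged


def stage_one(source: List[Example], synth: List[Example]) -> List[Example]:
    """Stage 1: train on manually labeled source data plus synthetic target data."""
    return source + synth


def stage_two(synth: List[Example]) -> List[Example]:
    """Stage 2: adapt to the target domain using only synthetic target images."""
    return synth


source = [Example("xbd_001", "source", 1), Example("xbd_002", "source", 0)]
synth = [Example("gen_001", "target_synth", 1)]

print(len(stage_one(source, synth)))  # 3
print(len(stage_two(synth)))          # 1
```

In this framing, the synthetic target-domain images substitute for the missing manual labels in under-resourced geographies, which is what lets the second stage adapt the model without any human annotation from the target region.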
DOI: 10.48550/arxiv.2405.13779