Semantic-aware Network for Aerial-to-Ground Image Synthesis
Format: | Article |
Language: | English |
Summary: | Aerial-to-ground image synthesis is an emerging and challenging problem that
aims to synthesize a ground image from an aerial image. Due to the highly
different layout and object representation between aerial and ground images,
existing approaches usually fail to transfer the components of the aerial scene
into the ground scene. In this paper, we propose a novel framework to address
these challenges by imposing enhanced structural alignment and semantic
awareness. We introduce a novel semantic-attentive feature transformation
module that reconstructs complex geographic structures by aligning aerial
features to the ground layout. Furthermore, we propose semantic-aware loss
functions that leverage a pre-trained segmentation network. The network is
encouraged to synthesize realistic objects across various classes by
calculating losses for each class separately and balancing them. Extensive
experiments, including comparisons with previous methods and ablation studies,
show the effectiveness of the proposed framework both qualitatively and
quantitatively. |
---|---|
DOI: | 10.48550/arxiv.2308.06945 |
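The abstract describes semantic-aware losses that are computed per class and then balanced, so that frequent classes (e.g. road, sky) do not drown out rare ones. The paper itself does not give the exact formula here, but a minimal sketch of that idea, assuming per-pixel cross-entropy against labels from a pre-trained segmentation network (function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def class_balanced_semantic_loss(pred, target, num_classes):
    """Per-class cross-entropy, averaged across classes (a sketch).

    pred:   (H, W, C) softmax probabilities for the synthesized image,
            e.g. from a pre-trained segmentation network.
    target: (H, W) integer class labels for the ground-truth layout.
    """
    per_class_losses = []
    for c in range(num_classes):
        mask = target == c
        if not mask.any():
            continue  # class absent from this image; skip it
        # Cross-entropy restricted to pixels belonging to class c.
        p = np.clip(pred[mask, c], 1e-7, 1.0)
        per_class_losses.append(-np.log(p).mean())
    # Balancing: each present class contributes equally, regardless of
    # how many pixels it covers.
    return float(np.mean(per_class_losses))
```

Averaging the per-class means (rather than pooling all pixels) is one simple way to realize the "separately calculating losses for different classes and balancing them" described above; other weighting schemes (e.g. inverse class frequency) fit the same template.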