Text2Street: Controllable Text-to-image Generation for Street Views
Format: Article
Language: English
Abstract: Text-to-image generation has made remarkable progress with the emergence of diffusion models. However, generating street-view images from text remains difficult, mainly because the road topology of street scenes is complex, the traffic status is diverse, and the weather conditions vary widely, all of which conventional text-to-image models struggle to handle. To address these challenges, we propose a novel controllable text-to-image framework, named Text2Street. In this framework, we first introduce a lane-aware road topology generator, which achieves text-to-map generation with accurate road structure and lane lines with the aid of a counting adapter, realizing controllable road topology generation. Then, a position-based object layout generator is proposed to achieve text-to-layout generation through an object-level bounding-box diffusion strategy, realizing controllable traffic-object layout generation. Finally, a multiple-control image generator is designed to integrate the road topology, object layout, and weather description to realize controllable street-view image generation. Extensive experiments show that the proposed approach achieves controllable street-view text-to-image generation and validate the effectiveness of the Text2Street framework.
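Read as an architecture, the abstract chains three controllable stages: a lane-aware road topology generator (text-to-map), a position-based object layout generator (text-to-layout via bounding-box diffusion), and a multiple-control image generator that fuses both with the weather description. The sketch below illustrates only that composition; every class name, method signature, and placeholder return value is hypothetical, since this record links to no official implementation.

```python
# Minimal, illustrative sketch of the three-stage Text2Street pipeline.
# All names, signatures, and placeholder outputs are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

BoundingBox = Tuple[float, float, float, float]  # (x, y, width, height)


@dataclass
class StreetPrompt:
    road_topology: str  # e.g. "two-way road with four lanes"
    traffic: str        # e.g. "three cars and one bus"
    weather: str        # e.g. "heavy rain at dusk"


class LaneAwareRoadTopologyGenerator:
    """Stage 1: text-to-map generation; a counting adapter is described as
    keeping the number of generated lane lines consistent with the prompt."""

    def generate_map(self, description: str) -> dict:
        # Placeholder: a real version would run diffusion over a semantic map.
        return {"roads": description, "lane_lines": 4}


class PositionBasedObjectLayoutGenerator:
    """Stage 2: text-to-layout generation via an object-level
    bounding-box diffusion strategy."""

    def generate_layout(self, description: str) -> List[BoundingBox]:
        # Placeholder boxes standing in for denoised object positions.
        return [(0.2, 0.6, 0.1, 0.1), (0.5, 0.55, 0.15, 0.12)]


class MultipleControlImageGenerator:
    """Stage 3: fuse road topology, object layout, and the weather text
    as joint conditions for controllable image synthesis."""

    def generate(self, road_map: dict, layout: List[BoundingBox],
                 weather: str) -> dict:
        # Placeholder for the final conditioned diffusion sampling step.
        return {"map": road_map, "layout": layout, "weather": weather}


def text2street(prompt: StreetPrompt) -> dict:
    """Compose the three controllable stages end to end."""
    road_map = LaneAwareRoadTopologyGenerator().generate_map(prompt.road_topology)
    layout = PositionBasedObjectLayoutGenerator().generate_layout(prompt.traffic)
    return MultipleControlImageGenerator().generate(road_map, layout, prompt.weather)


image = text2street(StreetPrompt("four-lane crossroad", "three cars", "snowy noon"))
```

The point of the composition is that each stage is independently controllable: the road map, the object layout, and the weather condition enter the final generator as separate conditions rather than one entangled text prompt.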
DOI: 10.48550/arxiv.2402.04504