Boosting Visual Fidelity in Driving Simulations through Diffusion Models
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Diffusion models have made substantial progress in image generation and editing. As the technology matures, we see its potential in the context of driving simulations to enhance the simulated experience. In this paper, we explore this potential through the introduction of a novel system designed to boost visual fidelity. Our system, DRIVE (Diffusion-based Realism Improvement for Virtual Environments), leverages a diffusion model pipeline to give a simulated environment a photorealistic view, with the flexibility to be adapted for other applications. We conducted a preliminary user study to assess the system's effectiveness in rendering realistic visuals and in supporting participants in performing driving tasks. Our work not only lays the groundwork for future research on integrating diffusion models into driving simulations but also provides practical guidelines and best practices for their application in this context.
DOI: 10.48550/arxiv.2410.04214
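The record does not detail DRIVE's internals, but the general technique the abstract names (passing simulator frames through a diffusion model to obtain a photorealistic view) can be sketched with an off-the-shelf image-to-image pipeline. The snippet below is a minimal illustration assuming the Hugging Face diffusers library and the public Stable Diffusion v1.5 checkpoint; the prompt, strength value, and file names are hypothetical assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's actual DRIVE pipeline): an image-to-image
# diffusion pass that restyles a rendered simulator frame toward photorealism.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Assumed checkpoint; any img2img-capable diffusion model could be substituted.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A frame exported from the driving simulator (hypothetical path).
sim_frame = Image.open("sim_frame.png").convert("RGB").resize((768, 512))

# A low `strength` keeps the scene layout (road, cars, signs) while the model
# re-renders textures and lighting to look photographic.
result = pipe(
    prompt="photorealistic dashcam photo of a city street, daylight",
    image=sim_frame,
    strength=0.4,
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]

result.save("sim_frame_photoreal.png")
```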