Deep generative smoke simulator: connecting simulated and real data

Bibliographic Details
Published in: The Visual Computer 2020-07, Vol. 36 (7), pp. 1385-1399
Main authors: Wen, Jinghuan; Ma, Huimin; Luo, Xiong
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: We propose a novel generative adversarial architecture to generate realistic smoke sequences. Physically based smoke simulation methods are difficult to match with real-captured data, since smoke is highly sensitive to disturbances. In our work, we design a generator that takes into account the temporal movement of smoke as well as its detailed structures. With the help of convolutional neural networks and a long short-term memory (LSTM)-based autoencoder, our generator can predict future frames using temporal information while preserving details. We use generative adversarial networks to train the model on both simulated and real-captured data, and we propose a combined loss function that reflects both the physical laws and the data distributions. We also demonstrate a multi-phase training strategy that significantly speeds up convergence and increases the stability of training on real-captured data. To test our approach, we set up experiments to capture real smoke sequences and show that our method can achieve realistic visual effects.
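The abstract describes a generator that combines a convolutional encoder/decoder with an LSTM-based temporal model, trained adversarially with a loss that mixes a physics/simulation term and a data term. The following PyTorch sketch is only an illustration of that idea as summarized above: the layer sizes, the single-layer LSTM, and the weight lambda_phys are assumptions for the example and are not taken from the paper.

```python
import torch
import torch.nn as nn

class SmokeGenerator(nn.Module):
    """Predict the next density frame from a short history of frames.

    Illustrative only: a conv encoder, an LSTM over per-frame latent codes,
    and a deconv decoder; the paper's exact architecture is not reproduced.
    """

    def __init__(self, latent_dim=256):
        super().__init__()
        # Encoder: 1x64x64 density field -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Temporal model over the sequence of latent codes
        self.lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        # Decoder: latent vector -> 1x64x64 predicted next frame
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):
        # frames: (batch, time, 1, 64, 64) -> predicted frame (batch, 1, 64, 64)
        b, t = frames.shape[:2]
        z = self.encoder(frames.reshape(b * t, 1, 64, 64)).reshape(b, t, -1)
        out, _ = self.lstm(z)
        return self.decoder(out[:, -1])


def combined_generator_loss(disc_fake, pred, sim_target, lambda_phys=10.0):
    """Adversarial term (realism of the predicted frame) plus a
    simulation-matching term standing in for the physics constraint."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_fake, torch.ones_like(disc_fake))
    phys = nn.functional.l1_loss(pred, sim_target)
    return adv + lambda_phys * phys
```

Feeding a tensor of shape (batch, time, 1, 64, 64) yields the predicted next frame; a full training loop would alternate discriminator and generator updates and, per the abstract, follow a multi-phase schedule when moving to real-captured data.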
ISSN: 0178-2789
1432-2315
DOI: 10.1007/s00371-019-01738-y