DTSGAN: Learning Dynamic Textures via Spatiotemporal Generative Adversarial Network
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Dynamic texture synthesis aims to generate sequences that are visually similar to a reference video texture and exhibit specific stationary properties in time. In this paper, we introduce a spatiotemporal generative adversarial network (DTSGAN) that can learn from a single dynamic texture by capturing its motion and content distribution. With the DTSGAN pipeline, a new video sequence is generated from the coarsest scale to the finest one. To avoid mode collapse, we propose a novel strategy for data updates that helps improve the diversity of generated results. Qualitative and quantitative experiments show that our model is able to generate high-quality dynamic textures and natural motion.
DOI: 10.48550/arxiv.2412.16948
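
The coarse-to-fine generation pipeline described in the abstract resembles single-sample GANs such as SinGAN extended to video. Below is a minimal PyTorch sketch of that generation scheme; the module names, tensor shapes, residual-refinement step, and scale schedule are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative sketch of coarse-to-fine video generation with a pyramid of
# per-scale generators. This is NOT the authors' code; all names and shapes
# are assumptions chosen for a self-contained example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleGenerator(nn.Module):
    """One generator in the pyramid: refines an upsampled coarser video."""
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, coarse, noise):
        # Residual refinement: add generated detail to the upsampled input.
        return coarse + self.net(coarse + noise)

def generate(generators, sizes, frames=16, channels=3, device="cpu"):
    """Run the pyramid from the coarsest scale to the finest one."""
    video = torch.zeros(1, channels, frames, *sizes[0], device=device)
    for gen, (h, w) in zip(generators, sizes):
        # Upsample the previous scale's output, inject fresh noise, refine.
        video = F.interpolate(video, size=(frames, h, w),
                              mode="trilinear", align_corners=False)
        noise = torch.randn_like(video)
        video = gen(video, noise)
    return video

if __name__ == "__main__":
    sizes = [(16, 16), (32, 32), (64, 64)]  # coarsest -> finest spatial scales
    gens = [ScaleGenerator() for _ in sizes]
    out = generate(gens, sizes)
    print(out.shape)  # torch.Size([1, 3, 16, 64, 64])
```

Training such a pyramid would typically proceed scale by scale with a discriminator at each level; the data-update strategy the abstract mentions for avoiding mode collapse would live in that training loop, which is omitted here.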