Temporal Shift GAN for Large Scale Video Generation
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Video generation models have become increasingly popular in the last few
years; however, the standard 2D architectures used today lack natural
spatio-temporal modelling capabilities. In this paper, we present a network
architecture for video generation that models spatio-temporal consistency
without resorting to costly 3D architectures. The architecture facilitates
information exchange between neighboring time points, which improves the
temporal consistency of both the high-level structure and the low-level
details of the generated frames. The approach achieves state-of-the-art
quantitative performance, as measured by the inception score on the UCF-101
dataset, as well as better qualitative results. We also introduce a new
quantitative measure (S3) that uses downstream tasks for evaluation. Moreover,
we present a new multi-label dataset, MaisToy, which enables us to evaluate the
generalization of the model. |
---|---|
DOI: | 10.48550/arxiv.2004.01823 |