Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis
Saved in:
Main authors: | Menapace, Willi; Siarohin, Aliaksandr; Skorokhodov, Ivan; Deyneka, Ekaterina; Chen, Tsai-Shien; Kag, Anil; Fang, Yuwei; Stoliar, Aleksei; Ricci, Elisa; Ren, Jian; Tulyakov, Sergey |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
creator | Menapace, Willi; Siarohin, Aliaksandr; Skorokhodov, Ivan; Deyneka, Ekaterina; Chen, Tsai-Shien; Kag, Anil; Fang, Yuwei; Stoliar, Aleksei; Ricci, Elisa; Ren, Jian; Tulyakov, Sergey |
description | Contemporary models for generating images show remarkable quality and
versatility. Swayed by these advantages, the research community repurposes them
to generate videos. Since video content is highly redundant, we argue that
naively bringing advances of image models to the video generation domain
reduces motion fidelity and visual quality, and impairs scalability. In this work,
we build Snap Video, a video-first model that systematically addresses these
challenges. To do that, we first extend the EDM framework to take into account
spatially and temporally redundant pixels and naturally support video
generation. Second, we show that a U-Net - a workhorse behind image generation
- scales poorly when generating videos, requiring significant computational
overhead. Hence, we propose a new transformer-based architecture that trains
3.31 times faster than U-Nets (and is ~4.5× faster at inference). This allows us
to efficiently train a text-to-video model with billions of parameters for the
first time, reach state-of-the-art results on a number of benchmarks, and
generate videos with substantially higher quality, temporal consistency, and
motion complexity. User studies showed that our model was favored by a
large margin over the most recent methods. See our website at
https://snap-research.github.io/snapvideo/. (Illustrative sketches of the EDM
preconditioning and of joint spatiotemporal attention follow the record fields
below.) |
doi_str_mv | 10.48550/arxiv.2402.14797 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2402.14797 |
language | eng |
recordid | cdi_arxiv_primary_2402_14797 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
title | Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T01%3A25%3A32IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Snap%20Video:%20Scaled%20Spatiotemporal%20Transformers%20for%20Text-to-Video%20Synthesis&rft.au=Menapace,%20Willi&rft.date=2024-02-22&rft_id=info:doi/10.48550/arxiv.2402.14797&rft_dat=%3Carxiv_GOX%3E2402_14797%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
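The abstract's first contribution is extending the EDM diffusion framework so that it accounts for spatially and temporally redundant video pixels. As a reference point only, here is a minimal sketch of the standard EDM preconditioning from Karras et al. (2022), the framework the paper starts from; the `redundancy_scale` argument is a hypothetical placeholder marking where a video-specific correction could enter, not Snap Video's actual formulation.

```python
# Minimal sketch of standard EDM preconditioning (Karras et al., 2022).
# `redundancy_scale` is a hypothetical knob (> 1 for highly redundant video),
# added only to mark where a video-specific correction could be folded in;
# it is NOT the formulation used by Snap Video.
import torch

def edm_precondition(sigma, sigma_data=0.5, redundancy_scale=1.0):
    sigma = torch.as_tensor(sigma, dtype=torch.float32) / redundancy_scale
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)                 # skip-connection weight
    c_out = sigma * sigma_data / torch.sqrt(sigma**2 + sigma_data**2)   # network-output scale
    c_in = 1.0 / torch.sqrt(sigma**2 + sigma_data**2)                   # input scale
    c_noise = torch.log(sigma) / 4.0                                    # noise-level conditioning
    return c_skip, c_out, c_in, c_noise

# The denoiser is then D(x, sigma) = c_skip * x + c_out * F(c_in * x, c_noise),
# where F is the learned network.
```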
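The second contribution replaces the U-Net with a transformer that attends jointly over space and time. The block below is a generic illustration of that idea, flattening all frames into a single token sequence; it is not Snap Video's actual architecture, and the dimensions are placeholder assumptions.

```python
# Generic joint space-time transformer block (illustrative only; not the
# paper's architecture). All sizes below are placeholder assumptions.
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens):
        # tokens: (batch, frames * patches, dim); flattening every frame into one
        # sequence lets self-attention mix information across space and time jointly.
        h = self.norm1(tokens)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        tokens = tokens + attn_out
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens

# Example: 4 frames of 8x8 patch tokens -> one 256-token sequence per sample.
video_tokens = torch.randn(2, 4 * 8 * 8, 512)
out = SpatioTemporalBlock()(video_tokens)
print(out.shape)  # torch.Size([2, 256, 512])
```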