CVPR 2023 Text Guided Video Editing Competition
Humans watch more than a billion hours of video per day. Most of this video was edited manually, which is a tedious process. However, AI-enabled video generation and video editing are on the rise. Building on text-to-image models like Stable Diffusion and Imagen, generative AI has improved dramatically on video tasks. But it is hard to evaluate progress on these tasks because there is no standard benchmark. So, we propose a new dataset for text-guided video editing (TGVE), and we run a competition at CVPR to evaluate models on our TGVE dataset. In this paper we present a retrospective on the competition and describe the winning method. The competition dataset is available at https://sites.google.com/view/loveucvpr23/track4.
Main authors: | Wu, Jay Zhangjie; Li, Xiuyu; Gao, Difei; Dong, Zhen; Bai, Jinbin; Singh, Aishani; Xiang, Xiaoyu; Li, Youzeng; Huang, Zuwei; Sun, Yuanxi; He, Rui; Hu, Feng; Hu, Junhua; Huang, Hai; Zhu, Hanyu; Cheng, Xu; Tang, Jie; Shou, Mike Zheng; Keutzer, Kurt; Iandola, Forrest
---|---|
Format: | Article
Language: | English
Subjects: | Computer Science - Computer Vision and Pattern Recognition
DOI: | 10.48550/arxiv.2310.16003
Source: | arXiv.org
Online access: | Order full text