Towards Open-Vocabulary Video Instance Segmentation

Bibliographic details
Authors: Wang, Haochen; Yan, Cilin; Wang, Shuai; Jiang, Xiaolong; Tang, XU; Hu, Yao; Xie, Weidi; Gavves, Efstratios
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
Description: Video Instance Segmentation (VIS) aims to segment and categorize objects in videos from a closed set of training categories, and therefore lacks the generalization ability to handle novel categories in real-world videos. To address this limitation, we make the following three contributions. First, we introduce the novel task of Open-Vocabulary Video Instance Segmentation, which aims to simultaneously segment, track, and classify objects in videos from open-set categories, including novel categories unseen during training. Second, to benchmark Open-Vocabulary VIS, we collect a Large-Vocabulary Video Instance Segmentation dataset (LV-VIS), which contains well-annotated objects from 1,196 diverse categories, significantly surpassing the category size of existing datasets by more than one order of magnitude. Third, we propose an efficient Memory-Induced Transformer architecture, OV2Seg, the first to achieve Open-Vocabulary VIS in an end-to-end manner with near real-time inference speed. Extensive experiments on LV-VIS and four existing VIS datasets demonstrate the strong zero-shot generalization ability of OV2Seg on novel categories. The dataset and code are released at https://github.com/haochenheheda/LVVIS.
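
The abstract does not spell out the mechanics, but the core idea of open-vocabulary classification can be illustrated with a short sketch: each instance query produced by the segmentation model is scored against text embeddings of arbitrary category names (for example, CLIP text embeddings), so novel categories only require new text prompts rather than retraining. The Python sketch below illustrates that general mechanism only; it is not the official OV2Seg code, and the tensor names, embedding dimension, and temperature value are illustrative assumptions.

    # Minimal sketch of open-vocabulary classification for instance queries.
    # Not the official OV2Seg implementation: `query_embeds`, `text_embeds`,
    # and the temperature value are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def classify_queries(query_embeds: torch.Tensor,
                         text_embeds: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
        """Score each instance query against every category name.

        query_embeds: (num_queries, dim)  projected per-instance features
        text_embeds:  (num_classes, dim)  text embeddings of category names,
                      which may include categories unseen during training
        returns:      (num_queries, num_classes) classification probabilities
        """
        q = F.normalize(query_embeds, dim=-1)  # unit-normalize instance features
        t = F.normalize(text_embeds, dim=-1)   # unit-normalize text features
        logits = q @ t.t() / temperature       # temperature-scaled cosine similarity
        return logits.softmax(dim=-1)

    # Example: 100 instance queries scored against the 1,196 LV-VIS category names.
    scores = classify_queries(torch.randn(100, 256), torch.randn(1196, 256))
    print(scores.shape)  # torch.Size([100, 1196])

Because classification happens purely through this query-to-text matching, swapping in a different category vocabulary at test time requires no retraining, which is what enables evaluation on categories never seen during training.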
DOI: 10.48550/arxiv.2304.01715
Published: 2023-04-04
Source: arXiv.org