CapDet: Unifying Dense Captioning and Open-World Detection Pretraining

Benefiting from large-scale vision-language pre-training on image-text pairs, open-world detection methods have shown superior generalization ability under zero-shot or few-shot detection settings. However, a pre-defined category space is still required during the inference stage of existing methods, and only objects belonging to that space will be predicted. To introduce a "real" open-world detector, this paper proposes a novel method named CapDet that can either predict under a given category list or directly generate the category of predicted bounding boxes. Specifically, it unifies open-world detection and dense captioning into a single yet effective framework by introducing an additional dense captioning head to generate region-grounded captions. Adding the captioning task in turn benefits the generalization of detection performance, since captioning datasets cover more concepts. Experimental results show that by unifying the dense captioning task, CapDet obtains significant performance improvements (e.g., +2.1% mAP on LVIS rare classes) over the baseline method on LVIS (1203 classes). CapDet also achieves state-of-the-art performance on dense captioning, e.g., 15.44% mAP on VG V1.2 and 13.98% on the VG-COCO dataset.
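
The abstract describes a two-branch design: a shared region representation feeds either an alignment head that scores regions against a user-supplied category list, or a captioning head that generates a free-form description when no list is given. The sketch below is a minimal, hypothetical PyTorch illustration of that dual-mode inference, not the authors' implementation; all module names, feature dimensions, and the greedy decoding loop are illustrative assumptions.

```python
# Minimal sketch (assumed structure, not the CapDet authors' code) of the
# dual-mode inference the abstract describes: classify regions against a
# given category list, or caption them when no category space is provided.
import torch
import torch.nn as nn

class CapDetSketch(nn.Module):
    def __init__(self, feat_dim=256, embed_dim=128, vocab_size=1000, max_len=12):
        super().__init__()
        self.region_proj = nn.Linear(feat_dim, embed_dim)    # region features -> joint space
        self.text_proj = nn.Linear(feat_dim, embed_dim)      # stand-in for a text encoder
        self.caption_head = nn.GRUCell(embed_dim, embed_dim) # stand-in dense-captioning decoder
        self.word_out = nn.Linear(embed_dim, vocab_size)
        self.max_len = max_len

    def classify(self, region_feats, category_embeds):
        # Open-world detection mode: cosine-similarity logits between each
        # predicted region and each category-name embedding in the given list.
        r = nn.functional.normalize(self.region_proj(region_feats), dim=-1)
        t = nn.functional.normalize(self.text_proj(category_embeds), dim=-1)
        return r @ t.T  # (num_regions, num_categories)

    def caption(self, region_feats):
        # Dense-captioning mode: greedily decode a token sequence per region,
        # conditioned on the region embedding (a toy autoregressive loop).
        h = self.region_proj(region_feats)
        x = torch.zeros_like(h)
        tokens = []
        for _ in range(self.max_len):
            h = self.caption_head(x, h)
            next_tok = self.word_out(h).argmax(dim=-1)
            tokens.append(next_tok)
            x = h  # feed hidden state back as next input (simplification)
        return torch.stack(tokens, dim=1)  # (num_regions, max_len) token ids

model = CapDetSketch()
regions = torch.randn(5, 256)  # 5 detected regions (dummy features)
cats = torch.randn(3, 256)     # embeddings for a 3-class category list
print(model.classify(regions, cats).shape)  # torch.Size([5, 3])
print(model.caption(regions).shape)         # torch.Size([5, 12])
```

The point of the sketch is the interface: with a category list the model behaves like an ordinary open-vocabulary detector, and without one it falls back to generation, so no pre-defined category space is needed at inference.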

Bibliographic Details
Main Authors: Long, Yanxin; Wen, Youpeng; Han, Jianhua; Xu, Hang; Ren, Pengzhen; Zhang, Wei; Zhao, Shen; Liang, Xiaodan
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: https://arxiv.org/abs/2303.02489
DOI: 10.48550/arxiv.2303.02489
Source: arXiv.org