CapDet: Unifying Dense Captioning and Open-World Detection Pretraining
Saved in:
Main authors: , , , , , , ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Summary: Benefiting from large-scale vision-language pre-training on image-text pairs,
open-world detection methods have shown superior generalization ability under
the zero-shot or few-shot detection settings. However, a pre-defined category
space is still required during the inference stage of existing methods and only
the objects belonging to that space will be predicted. To introduce a "real"
open-world detector, in this paper, we propose a novel method named CapDet to
either predict under a given category list or directly generate the category of
predicted bounding boxes. Specifically, we unify the open-world detection and
dense caption tasks into a single yet effective framework by introducing an
additional dense captioning head to generate the region-grounded captions.
Besides, adding the captioning task will in turn benefit the generalization of
detection performance since the captioning dataset covers more concepts.
Experimental results show that by unifying the dense caption task, our CapDet
obtained significant performance improvements (e.g., +2.1% mAP on LVIS rare
classes) over the baseline method on LVIS (1203 classes). Besides, our CapDet
also achieves state-of-the-art performance on dense captioning tasks, e.g.,
15.44% mAP on VG V1.2 and 13.98% on the VG-COCO dataset.
DOI: 10.48550/arxiv.2303.02489
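The summary above describes the core design: a shared detector whose region features feed either a category-matching branch, when a category list is supplied, or a dense captioning head, when it is not. Below is a minimal, hypothetical PyTorch sketch of that two-mode interface. The class name, the single-linear-layer heads, and all dimensions are illustrative assumptions, not the authors' implementation, which would use a full detection backbone and an autoregressive caption decoder over region features.

```python
import torch
import torch.nn as nn


class CapDetSketch(nn.Module):
    """Hypothetical sketch of a unified detection + dense-captioning model.

    A shared set of per-region features drives two output modes:
      * with a category list: score each region against the category
        text embeddings (open-world detection);
      * without one: generate a caption for each region instead
        (here reduced to one step of token logits for brevity).
    """

    def __init__(self, feat_dim=256, embed_dim=256, vocab_size=30522):
        super().__init__()
        # Projects region features into the shared region-text embedding space.
        self.region_proj = nn.Linear(feat_dim, embed_dim)
        # Box regression head: 4 coordinates per candidate region.
        self.box_head = nn.Linear(feat_dim, 4)
        # Stand-in for a dense captioning decoder: per-region token logits.
        self.caption_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, region_feats, category_embeds=None):
        boxes = self.box_head(region_feats)
        region_embeds = self.region_proj(region_feats)
        if category_embeds is not None:
            # Detection mode: similarity between region and category embeddings.
            class_logits = region_embeds @ category_embeds.t()
            return boxes, class_logits
        # Captioning mode: emit token logits describing each region.
        token_logits = self.caption_head(region_embeds)
        return boxes, token_logits


model = CapDetSketch()
feats = torch.randn(10, 256)   # 10 candidate regions (hypothetical features)
cats = torch.randn(80, 256)    # embeddings for an 80-class category list
boxes, cls_logits = model(feats, cats)   # predict within the given list
boxes, tok_logits = model(feats)         # generate categories/captions directly
```

In the first call the model behaves like a conventional open-world detector scored against the supplied category embeddings; omitting the list switches it to caption generation, mirroring the "either predict under a given category list or directly generate the category" behavior described in the summary.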