Proposal-Contrastive Pretraining for Object Detection from Fewer Data
Main authors: , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: The use of pretrained deep neural networks represents an attractive way to achieve strong results with few data available. When specialized in dense problems such as object detection, learning local rather than global information in images has proven to be more efficient. However, for unsupervised pretraining, the popular contrastive learning approach requires a large batch size and, therefore, substantial resources. To address this problem, we are interested in transformer-based object detectors, which have recently gained traction in the community with good performance and the particularity of generating many diverse object proposals.

In this work, we present Proposal Selection Contrast (ProSeCo), a novel unsupervised overall pretraining approach that leverages this property. ProSeCo uses the large number of object proposals generated by the detector for contrastive learning, which allows the use of a smaller batch size, combined with object-level features to learn local information in the images. To improve the effectiveness of the contrastive loss, we introduce the object location information into the selection of positive examples, taking into account multiple overlapping object proposals. When reusing a pretrained backbone, we advocate for consistency in learning local information between the backbone and the detection head. We show that our method outperforms the state of the art in unsupervised pretraining for object detection on standard and novel benchmarks for learning with fewer data.
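The abstract describes the core mechanism only at a high level: a contrastive loss over object proposals, with positives selected using location information so that several overlapping proposals can count as positive. Below is a minimal sketch of that idea, not the authors' implementation. It contrasts proposal embeddings from two augmented views with an InfoNCE-style loss and picks positives by box overlap (IoU). The function name `proposal_contrastive_loss`, the tensor shapes, the IoU threshold, and the temperature are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): contrastive learning
# over object proposals with location-aware positive selection.
import torch
import torch.nn.functional as F
from torchvision.ops import box_iou


def proposal_contrastive_loss(feats_a, boxes_a, feats_b, boxes_b,
                              iou_thresh=0.5, temperature=0.1):
    """Contrast proposals of two augmented views of the same image.

    feats_a: (N, D) proposal embeddings from view A
    boxes_a: (N, 4) proposal boxes (x1, y1, x2, y2) from view A
    feats_b, boxes_b: same for view B, with M proposals
    """
    feats_a = F.normalize(feats_a, dim=-1)
    feats_b = F.normalize(feats_b, dim=-1)

    # Similarity logits between every cross-view proposal pair: (N, M).
    logits = feats_a @ feats_b.t() / temperature

    # Location-aware positive selection: proposals whose boxes overlap
    # sufficiently are positives, so one anchor may have several positives.
    pos = (box_iou(boxes_a, boxes_b) >= iou_thresh).float()  # (N, M)

    # InfoNCE-style loss averaged over the multiple positives per anchor;
    # anchors with no positive match are skipped.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    n_pos = pos.sum(dim=1)
    keep = n_pos > 0
    loss = -(log_prob * pos).sum(dim=1)[keep] / n_pos[keep]
    return loss.mean()
```

The multi-positive averaging follows the supervised-contrastive style of loss; because each image already yields many proposals, the negatives come from within the image pair rather than from a large batch, which is consistent with the abstract's claim that proposal-level contrast permits smaller batch sizes.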
DOI: 10.48550/arxiv.2310.16835