Injecting Image Details into CLIP's Feature Space
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Abstract: | Although CLIP-like Visual Language Models provide a functional joint feature
space for image and text, the limited image input size of CLIP-like models
(e.g., 224) means that subtle details are lost in the feature representation
when high-resolution images (e.g., 2240) are used as input. In this work, we
introduce an efficient framework that produces a single feature representation
for a high-resolution image, one that injects image details and shares the same
semantic space as the original CLIP. In the framework, we train a feature-fusing
model on CLIP features extracted with a carefully designed image patch method
that can cover objects of any scale, weakly supervised by image-agnostic
class-prompted queries. We validate our framework by retrieving images from
class-prompted queries on real-world and synthetic datasets, showing significant
performance improvements on these tasks. Furthermore, to fully demonstrate our
framework's detail-retrieval ability, we construct a CLEVR-like synthetic
dataset called CLEVR-DS, which is fully annotated and has a controllable object
scale. |
DOI: | 10.48550/arxiv.2208.14649 |
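
The abstract sketches the approach at a high level: CLIP features are extracted from multi-scale patches of a high-resolution image and fused into a single embedding that lives in CLIP's joint image-text space, so images can then be retrieved with class-prompted text queries. The snippet below is a minimal illustrative sketch of that pipeline, not the authors' implementation: it assumes the open-source `clip` package with a ViT-B/32 backbone, uses a simple fixed grid of patch scales, and replaces the paper's learned feature-fusing model with a plain mean over patch features.

```python
# Minimal sketch (not the paper's method): encode multi-scale patches of a
# high-resolution image with CLIP, fuse them into one embedding, and score it
# against a class-prompted text query.
import torch
import clip  # https://github.com/openai/CLIP (assumed backbone: ViT-B/32)
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def multi_scale_patches(img: Image.Image, scales=(1, 2, 4)):
    """Tile the image at several grid sizes so objects of different scales fall
    inside at least one patch (a stand-in for the paper's patch scheme)."""
    w, h = img.size
    for n in scales:
        pw, ph = w // n, h // n
        for i in range(n):
            for j in range(n):
                yield img.crop((i * pw, j * ph, (i + 1) * pw, (j + 1) * ph))


@torch.no_grad()
def fused_image_feature(img: Image.Image) -> torch.Tensor:
    # Encode every patch with the frozen CLIP image encoder.
    batch = torch.stack([preprocess(p) for p in multi_scale_patches(img)]).to(device)
    feats = model.encode_image(batch).float()
    feats = feats / feats.norm(dim=-1, keepdim=True)
    # Placeholder fusion: mean of patch features (the paper trains this step).
    fused = feats.mean(dim=0)
    return fused / fused.norm()


@torch.no_grad()
def class_prompt_score(img_path: str, class_name: str) -> float:
    """Cosine similarity between the fused image feature and a class prompt."""
    image_feat = fused_image_feature(Image.open(img_path).convert("RGB"))
    tokens = clip.tokenize([f"a photo of a {class_name}"]).to(device)
    text_feat = model.encode_text(tokens).float()
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    return float(image_feat @ text_feat[0])
```

Ranking a gallery of images by `class_prompt_score` for a given class name would then mimic the class-prompted retrieval evaluation described in the abstract, with the mean-fusion step standing in for the learned model.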