Probabilistic Embeddings for Cross-Modal Retrieval
Format: Article
Language: English
Online access: Order full text
Abstract: Cross-modal retrieval methods build a common representation space for samples
from multiple modalities, typically from the vision and the language domains.
For images and their captions, the multiplicity of the correspondences makes
the task particularly challenging. Given an image (respectively a caption),
there are multiple captions (respectively images) that equally make sense. In
this paper, we argue that deterministic functions are not sufficiently powerful
to capture such one-to-many correspondences. Instead, we propose to use
Probabilistic Cross-Modal Embedding (PCME), where samples from the different
modalities are represented as probabilistic distributions in the common
embedding space. Since common benchmarks such as COCO suffer from
non-exhaustive annotations for cross-modal matches, we propose to additionally
evaluate retrieval on the CUB dataset, a smaller yet clean database where all
possible image-caption pairs are annotated. We extensively ablate PCME and
demonstrate that it not only improves the retrieval performance over its
deterministic counterpart but also provides uncertainty estimates that render
the embeddings more interpretable. Code is available at
https://github.com/naver-ai/pcme
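
To illustrate the core idea described in the abstract, below is a minimal, hypothetical PyTorch-style sketch of probabilistic embeddings: each modality head maps a feature to a Gaussian (mean and variance) in the shared space, embeddings are sampled via the reparameterisation trick, and a soft match probability is averaged over sample pairs. The class and function names, dimensions, sample count, and the scale/shift constants are assumptions for illustration only; the authors' actual implementation is in the repository linked above.

```python
# Hypothetical sketch of probabilistic cross-modal embeddings (not the official PCME code).
import torch
import torch.nn as nn


class ProbabilisticHead(nn.Module):
    """Maps a modality-specific feature to a Gaussian in the shared embedding space."""

    def __init__(self, feat_dim: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Linear(feat_dim, embed_dim)         # mean of the embedding
        self.log_sigma = nn.Linear(feat_dim, embed_dim)  # log std, i.e. per-sample uncertainty

    def forward(self, features: torch.Tensor, n_samples: int = 7) -> torch.Tensor:
        mu = self.mu(features)
        sigma = self.log_sigma(features).exp()
        # Draw n_samples embeddings per input via the reparameterisation trick.
        eps = torch.randn(n_samples, *mu.shape, device=mu.device)
        return mu + sigma * eps  # shape: (n_samples, batch, embed_dim)


def match_probability(z_a: torch.Tensor, z_b: torch.Tensor,
                      a: float = 10.0, b: float = 0.0) -> torch.Tensor:
    """Soft match probability between two sets of sampled embeddings:
    a sigmoid of a scaled negative distance, averaged over all sample pairs."""
    d = torch.cdist(z_a, z_b)                 # pairwise Euclidean distances, (n, m)
    return torch.sigmoid(-a * d + b).mean()   # scalar in (0, 1)


if __name__ == "__main__":
    # Assumed feature sizes for an image backbone and a caption encoder.
    img_head = ProbabilisticHead(feat_dim=2048, embed_dim=256)
    txt_head = ProbabilisticHead(feat_dim=1024, embed_dim=256)
    img_feat = torch.randn(1, 2048)
    txt_feat = torch.randn(1, 1024)
    z_img = img_head(img_feat).squeeze(1)     # (7, 256) sampled image embeddings
    z_txt = txt_head(txt_feat).squeeze(1)     # (7, 256) sampled caption embeddings
    print(match_probability(z_img, z_txt))
```

In such a formulation, the learned variance can serve as an uncertainty estimate: inputs with many plausible matches tend to receive broader distributions, which is the interpretability property the abstract refers to.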
DOI: 10.48550/arxiv.2101.05068