LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs
Main authors: |  |
Format: | Article |
Language: | English |
Subjects: |  |
Online access: | Order full text |
Summary: | Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) have recently surged in popularity, showing a remarkable capability to perform zero- or few-shot learning and transfer even in the absence of per-sample labels on the target image data. Despite this trend, to date there has been no publicly available dataset of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and publicly release LAION-400M, a dataset of 400 million CLIP-filtered image-text pairs, together with their CLIP embeddings and kNN indices that allow efficient similarity search. |
DOI: | 10.48550/arxiv.2111.02114 |
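
The summary mentions two technical components: CLIP-based filtering of candidate image-text pairs and kNN indices over the CLIP embeddings. The sketch below illustrates only the filtering idea, assuming OpenAI's reference `clip` package and an illustrative 0.3 cosine-similarity cutoff; it is a minimal approximation, not the dataset's actual construction pipeline.

```python
# Minimal sketch of CLIP-based filtering of candidate image-text pairs, in the
# spirit of the pipeline the summary describes. Assumes OpenAI's reference
# `clip` package (pip install git+https://github.com/openai/CLIP.git); the 0.3
# cosine-similarity cutoff is an illustrative default, not a verified setting.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def keep_pair(image_path: str, caption: str, threshold: float = 0.3) -> bool:
    """Keep a pair only if the CLIP cosine similarity of image and caption is high enough."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize([caption], truncate=True).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(tokens)
    # Normalise so the dot product equals cosine similarity.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item() >= threshold
```

The released kNN indices play the role that a generic nearest-neighbour index built over such normalised CLIP vectors would: they let users query the 400 million embeddings by similarity without recomputing them.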