MOFI: Learning Image Representations from Noisy Entity Annotated Images
Format: Article
Language: English
Abstract: We present MOFI, Manifold OF Images, a new vision foundation model designed to learn image representations from noisy entity-annotated images. MOFI differs from previous work in two key aspects: (i) pre-training data, and (ii) training recipe. Regarding data, we introduce a new approach to automatically assign entity labels to images from noisy image-text pairs. Our approach employs a named entity recognition model to extract entities from the alt-text, and then uses a CLIP model to select the correct entities as labels for the paired image. This simple, cost-effective method scales to billions of web-mined image-text pairs. With it, we have created Image-to-Entities (I2E), a new dataset with 1 billion images and 2 million distinct entities, covering rich visual concepts in the wild. Building upon the I2E dataset, we study different training recipes, including supervised pre-training, contrastive pre-training, and multi-task learning. For contrastive pre-training, we treat entity names as free-form text and further enrich them with entity descriptions. Experiments show that supervised pre-training with large-scale fine-grained entity labels is highly effective for image retrieval tasks, and that multi-task training further improves performance. The final MOFI model achieves 86.66% mAP on the challenging GPR1200 dataset, surpassing the previous state-of-the-art of 72.19% from OpenAI's CLIP model. Further experiments on zero-shot and linear-probe image classification also show that MOFI outperforms a CLIP model trained on the original image-text data, demonstrating the effectiveness of the I2E dataset for learning strong image representations. We release our code and model weights at https://github.com/apple/ml-mofi.
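The entity-labeling pipeline the abstract describes (named entity recognition over alt-text, then CLIP-based filtering of the candidates) can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the paper's released code: the spaCy and OpenAI CLIP model choices, the `entity_labels` helper, and the `sim_threshold` value are all illustrative.

```python
# Hypothetical sketch of the I2E labeling pipeline: extract candidate
# entities from alt-text with NER, then keep the candidates whose CLIP
# image-text similarity clears a threshold. Model names and the default
# threshold are illustrative assumptions, not the paper's settings.
import clip
import spacy
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
ner = spacy.load("en_core_web_sm")  # any NER model would do here

def entity_labels(image_path: str, alt_text: str, sim_threshold: float = 0.25):
    # Step 1: candidate entities extracted from the alt-text.
    candidates = list({ent.text for ent in ner(alt_text).ents})
    if not candidates:
        return []
    # Step 2: score each candidate against the image with CLIP.
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize(candidates).to(device)
    with torch.no_grad():
        img_f = clip_model.encode_image(image)
        txt_f = clip_model.encode_text(tokens)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    sims = (img_f @ txt_f.T).squeeze(0)
    # Step 3: keep only entities the image actually supports.
    return [c for c, s in zip(candidates, sims.tolist()) if s >= sim_threshold]
```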
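Similarly, the multi-task recipe (supervised classification over fine-grained entity labels combined with CLIP-style contrastive learning) could be sketched as a weighted sum of the two losses. The function below is a hypothetical illustration; the weight `alpha` and the temperature are assumptions, not values reported by the paper.

```python
# Minimal sketch of a multi-task objective: cross-entropy over entity
# labels plus a symmetric InfoNCE image-text contrastive loss. All
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def multitask_loss(image_emb, text_emb, class_logits, entity_ids,
                   temperature=0.07, alpha=0.5):
    # Supervised head: classification over fine-grained entity labels.
    cls_loss = F.cross_entropy(class_logits, entity_ids)
    # Contrastive head: symmetric InfoNCE between image and text embeddings.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature
    targets = torch.arange(len(logits), device=logits.device)
    con_loss = (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.T, targets)) / 2
    # Weighted combination of the two objectives.
    return alpha * cls_loss + (1 - alpha) * con_loss
```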
DOI: 10.48550/arxiv.2306.07952