Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Recent progress in scaling up large language models has shown impressive capabilities in performing few-shot learning across a wide range of text-based tasks. However, a key limitation is that these language models fundamentally lack visual perception, a crucial attribute needed to extend them to interact with the real world and solve vision tasks, such as visual question answering and robotics. Prior works have largely connected images to text through pretraining and/or fine-tuning on curated image-text datasets, which can be a costly process. To resolve this limitation, we propose a simple yet effective approach called the Language-Quantized AutoEncoder (LQAE), a modification of VQ-VAE that learns to align text-image data in an unsupervised manner by leveraging pretrained language models (e.g., BERT, RoBERTa). Our main idea is to encode images as sequences of text tokens by directly quantizing image embeddings using a pretrained language codebook. We then apply random masking followed by a BERT model and have the decoder reconstruct the original image from the BERT-predicted text token embeddings. In doing so, LQAE learns to represent similar images with similar clusters of text tokens, thereby aligning the two modalities without the use of aligned text-image pairs. This enables few-shot image classification with large language models (e.g., GPT-3) as well as linear classification of images based on BERT text features. To the best of our knowledge, ours is the first work to use unaligned images for multimodal tasks by leveraging the power of pretrained language models.
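
To make the pipeline in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of the idea, not the authors' implementation. The `ImageEncoder`/`ImageDecoder` modules, the `roberta-base` checkpoint, the `mask_ratio` value, the zero-vector masking, and the plain MSE reconstruction loss are assumptions made here for illustration; the paper's architecture and objective differ in detail.

```python
# Sketch of the LQAE idea: quantize image embeddings against a frozen language-model
# codebook, randomly mask the resulting "text" tokens, denoise them with the frozen
# language model, and reconstruct the image from the denoised embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import RobertaModel  # pretrained LM supplying codebook + denoiser


class LQAESketch(nn.Module):
    def __init__(self, image_encoder: nn.Module, image_decoder: nn.Module,
                 mask_ratio: float = 0.5):
        super().__init__()
        self.encoder = image_encoder          # image -> (B, L, D) patch embeddings
        self.decoder = image_decoder          # (B, L, D) embeddings -> reconstructed image
        self.lm = RobertaModel.from_pretrained("roberta-base")
        for p in self.lm.parameters():        # language model (and its codebook) stays frozen
            p.requires_grad = False
        # Frozen codebook = the LM's word-embedding table, shape (vocab_size, D).
        # D from the image encoder is assumed to match the LM hidden size (768 here).
        self.codebook = self.lm.get_input_embeddings().weight
        self.mask_ratio = mask_ratio

    def quantize(self, z):
        # Nearest-neighbour lookup of each image embedding in the text codebook.
        dists = torch.cdist(z, self.codebook.unsqueeze(0))   # (B, L, vocab_size)
        ids = dists.argmin(dim=-1)                           # (B, L) pseudo text-token ids
        quantized = self.codebook[ids]
        # Straight-through estimator so gradients still reach the image encoder.
        quantized = z + (quantized - z).detach()
        return quantized, ids

    def forward(self, images):
        z = self.encoder(images)                              # (B, L, D)
        quantized, ids = self.quantize(z)
        # Randomly mask a fraction of the quantized tokens (zeroing is a simplification).
        mask = torch.rand(ids.shape, device=ids.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1), torch.zeros_like(quantized), quantized)
        # The frozen LM denoises the masked sequence; its outputs feed the image decoder.
        hidden = self.lm(inputs_embeds=masked).last_hidden_state
        recon = self.decoder(hidden)
        loss = F.mse_loss(recon, images)                      # simplified reconstruction loss
        return loss, ids
```

Because the language model is frozen, only the image encoder and decoder are trained; the token ids produced by `quantize` act as a text-like representation of the image, which is what allows a large language model such as GPT-3 to be used for few-shot image classification as described in the abstract.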
DOI: 10.48550/arxiv.2302.00902