FashionVQA: A Domain-Specific Visual Question Answering System
Saved in:

Main Authors: , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Humans apprehend the world through various sensory modalities, yet language is their predominant communication channel. Machine learning systems need to draw on the same multimodal richness to hold informed conversations with humans in natural language; this is particularly true for systems specialized in visually dense information, such as dialogue, recommendation, and search engines for clothing. To this end, we train a visual question answering (VQA) system to answer complex natural language questions about apparel in fashion photoshoot images. The key to successfully training our VQA model is the automatic creation of a visual question answering dataset with 168 million samples, generated from the item attributes of 207 thousand images using diverse templates. The sample generation employs a strategy that considers the difficulty of the question-answer pairs in order to emphasize challenging concepts. Contrary to the recent trend of using several datasets to pretrain visual question answering models, we keep the dataset fixed while training various models from scratch, isolating the improvements attributable to model architecture changes. We find that using the same transformer for encoding the question and decoding the answer, as in language models, achieves the highest accuracy, showing that visual language models (VLMs) make the best visual question answering systems for our dataset. The accuracy of the best model surpasses the human expert level, even when answering human-generated questions that are not confined to the template formats. Our approach for generating a large-scale multimodal domain-specific dataset provides a path for training specialized models capable of communicating in natural language. The training of such domain-expert models, e.g., our fashion VLM, cannot rely solely on large-scale general-purpose datasets collected from the web.
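
The dataset construction summarized above (templated questions derived from item attributes, with harder concepts sampled more often) lends itself to a compact illustration. The Python sketch below shows one plausible way such generation could work; the item records, question templates, and difficulty weights are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical item-attribute records, standing in for the catalog metadata
# the paper extracts from 207K fashion photoshoot images.
items = [
    {"image_id": "img_001", "category": "dress", "color": "navy", "sleeve": "long"},
    {"image_id": "img_002", "category": "jacket", "color": "olive", "sleeve": "short"},
]

# Question templates keyed by the attribute they ask about. The paper uses
# "diverse templates"; these few are illustrative placeholders.
templates = {
    "color": ["What color is the {category}?",
              "Which color best describes the {category}?"],
    "sleeve": ["What sleeve length does the {category} have?"],
}

# Assumed difficulty weights: harder or more confusable attributes are
# oversampled, approximating the difficulty-aware strategy described above.
difficulty = {"color": 1.0, "sleeve": 2.5}

def generate_samples(items, n_samples, rng=random.Random(0)):
    """Draw (image, question, answer) triples, biased toward hard attributes."""
    attrs = list(templates)
    weights = [difficulty[a] for a in attrs]
    samples = []
    for _ in range(n_samples):
        item = rng.choice(items)
        attr = rng.choices(attrs, weights=weights, k=1)[0]
        question = rng.choice(templates[attr]).format(**item)
        samples.append({"image_id": item["image_id"],
                        "question": question,
                        "answer": item[attr]})
    return samples

for sample in generate_samples(items, 3):
    print(sample)
```

Weighting the attribute choice rather than filtering after generation keeps the pipeline a single pass, which matters at the paper's scale of 168 million samples; the exact weighting scheme used by the authors is not specified in the abstract.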
DOI: 10.48550/arxiv.2208.11253