Efficient Multi-Modal Embeddings from Structured Data
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Multi-modal word semantics aims to enhance embeddings with perceptual input, assuming that human meaning representation is grounded in sensory experience. Most research focuses on evaluations involving direct visual input; however, visual grounding can contribute to linguistic applications as well. Another motivation for this paper is the growing need for more interpretable models and for evaluating model efficiency in terms of size and performance. This work explores the impact of visual information on semantics when the evaluation involves no direct visual input, specifically semantic similarity and relatedness. We investigate a new embedding type in-between linguistic and visual modalities, based on the structured annotations of Visual Genome. We compare uni- and multi-modal models, including structured, linguistic, and image-based representations. We measure the efficiency of each model with regard to data and model size, modality/data distribution, and information gain. The analysis includes an interpretation of embedding structures. We find that this new embedding type conveys information complementary to text-based embeddings, and it achieves comparable performance economically, using orders of magnitude fewer resources than visual models.
DOI: 10.48550/arxiv.2110.02577
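
The abstract describes deriving word embeddings from the structured (scene-graph) annotations of Visual Genome and evaluating them on semantic similarity and relatedness alongside linguistic and visual embeddings. The sketch below is a minimal, hypothetical illustration of one way such a pipeline could look: co-occurrence counts over annotation triples, a PPMI-weighted matrix reduced with SVD, concatenation for multi-modal fusion, and Spearman correlation against human-rated similarity scores. None of the function names, dimensions, or fusion choices are taken from the paper; they are assumptions made here for illustration only.

```python
# Hypothetical sketch (not the paper's actual pipeline): word embeddings
# from scene-graph-style triples, multi-modal fusion, and similarity evaluation.
import numpy as np
from collections import Counter
from itertools import combinations
from scipy.stats import spearmanr


def cooccurrence_from_triples(triples):
    """Count how often two words appear in the same annotation triple,
    e.g. ("dog", "on", "grass") from a Visual Genome-style relationship."""
    pair_counts = Counter()
    for triple in triples:
        for a, b in combinations(sorted(set(triple)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts


def ppmi_svd_embeddings(pair_counts, dim=50):
    """Build a symmetric co-occurrence matrix, weight it with positive PMI,
    and reduce it to `dim` dimensions with truncated SVD."""
    vocab = sorted({w for pair in pair_counts for w in pair})
    index = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for (a, b), c in pair_counts.items():
        C[index[a], index[b]] += c
        C[index[b], index[a]] += c
    total = C.sum()
    row = C.sum(axis=1, keepdims=True)
    col = C.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(C * total / (row @ col))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
    U, S, _ = np.linalg.svd(ppmi, full_matrices=False)
    dim = min(dim, len(vocab))
    return {w: U[i, :dim] * S[:dim] for w, i in index.items()}


def concat_fusion(emb_a, emb_b):
    """Multi-modal fusion by L2-normalised concatenation (one common choice)."""
    fused = {}
    for w in emb_a.keys() & emb_b.keys():
        a = emb_a[w] / (np.linalg.norm(emb_a[w]) + 1e-9)
        b = emb_b[w] / (np.linalg.norm(emb_b[w]) + 1e-9)
        fused[w] = np.concatenate([a, b])
    return fused


def evaluate_similarity(embeddings, gold_pairs):
    """Spearman correlation between cosine similarities and human ratings,
    as in standard similarity/relatedness benchmarks."""
    model, human = [], []
    for w1, w2, score in gold_pairs:
        if w1 in embeddings and w2 in embeddings:
            u, v = embeddings[w1], embeddings[w2]
            cos = float(np.dot(u, v) /
                        (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
            model.append(cos)
            human.append(score)
    return spearmanr(model, human).correlation


if __name__ == "__main__":
    # Toy triples standing in for Visual Genome annotations.
    triples = [("dog", "on", "grass"), ("dog", "near", "cat"),
               ("cat", "on", "sofa"), ("dog", "chasing", "cat")]
    struct_emb = ppmi_svd_embeddings(cooccurrence_from_triples(triples), dim=3)
    gold = [("dog", "cat", 7.0), ("dog", "sofa", 2.5), ("cat", "sofa", 5.0)]
    print(evaluate_similarity(struct_emb, gold))
```

In a real comparison, the linguistic and visual embeddings would come from existing resources (for example, pretrained text vectors and CNN image features), and `concat_fusion` sketches only one of several possible fusion strategies; the paper's own construction and evaluation setup should be consulted for the actual method.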