Practical Cross-modal Manifold Alignment for Grounded Language
Format: Article
Language: English
Abstract: We propose a cross-modality manifold alignment procedure that leverages triplet loss to jointly learn consistent, multi-modal embeddings of language-based concepts of real-world items. Our approach learns these embeddings by sampling triples of anchor, positive, and negative data points from RGB-depth images and their natural language descriptions. We show that our approach can benefit from, but does not require, post-processing steps such as Procrustes analysis, in contrast to some of our baselines, which require it for reasonable performance. We demonstrate the effectiveness of our approach on two datasets commonly used to develop robot-based grounded language learning systems, where our approach outperforms four baselines, including a state-of-the-art approach, across five evaluation metrics.
DOI: 10.48550/arxiv.2009.05147
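The triplet-loss objective mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the toy embedding values, the Euclidean distance metric, and the margin of 1.0 are all assumptions made here for clarity.

```python
# Minimal sketch of a triplet loss over cross-modal embeddings.
# All concrete values (embeddings, margin, metric) are illustrative
# assumptions, not taken from the paper.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor toward the positive
    and push it away from the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the anchor (e.g. a language description) should sit
# closer to the matching RGB-depth embedding (positive) than to a
# mismatched one (negative).
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # same item, other modality
negative = np.array([-1.0, 0.0])  # different item

loss = triplet_loss(anchor, positive, negative)
```

When the anchor is already closer to the positive than to the negative by more than the margin, the loss is zero; swapping the positive and negative produces a positive loss that a gradient step would reduce.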