Multitask Multimodal Prompted Training for Interactive Embodied Task Completion
Saved in:

Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Unlike previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state of the art (36.81% success rate) on Dialog-guided Task Completion (DTC), a benchmark for evaluating dialog-guided agents in the Alexa Arena.
DOI: 10.48550/arxiv.2311.04067
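
The abstract's central idea, casting action prediction as multimodal text generation with a single encoder-decoder over visual features and instruction text, can be illustrated with a minimal sketch. The following PyTorch toy is an illustrative assumption, not EMMA's actual architecture or the paper's code; the class name, feature dimensions, vocabulary, and token layout are all hypothetical.

```python
# Hypothetical sketch: action prediction framed as text generation.
# Visual frame features and instruction tokens are encoded jointly; the
# decoder emits an action string token by token over a shared vocabulary
# that mixes words, action names, and visual reference tokens.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # illustrative shared vocabulary size
D_MODEL = 256

class ActionAsTextModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.visual_proj = nn.Linear(512, D_MODEL)   # project frame features into the text space
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, frame_feats, instr_ids, action_ids):
        # Encoder input: projected visual features concatenated with embedded instruction tokens.
        src = torch.cat([self.visual_proj(frame_feats), self.token_emb(instr_ids)], dim=1)
        # Decoder input: the action string so far (teacher forcing during training).
        tgt = self.token_emb(action_ids)
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.lm_head(out)   # next-token logits over the shared vocabulary

# Toy usage: one trajectory step with 4 frame features and a short instruction.
model = ActionAsTextModel()
frame_feats = torch.randn(1, 4, 512)                # e.g. per-frame or region features
instr_ids = torch.randint(0, VOCAB_SIZE, (1, 12))   # tokenized dialog/instruction
action_ids = torch.randint(0, VOCAB_SIZE, (1, 5))   # tokenized target action string
logits = model(frame_feats, instr_ids, action_ids)  # shape (1, 5, VOCAB_SIZE)
```

Because every task shares the same text-generation interface, adding another task in this framing only means adding new target strings, which is the kind of transfer across tasks the abstract describes.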