OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | This paper presents OmniJARVIS, a novel Vision-Language-Action (VLA) model
for open-world instruction-following agents in Minecraft. In contrast to prior
works that either emit textual goals to separate controllers or produce
control commands directly, OmniJARVIS takes a different path to ensure both
strong reasoning and efficient decision-making capabilities via unified
tokenization of multimodal interaction data. First, we introduce a
self-supervised approach to learn a behavior encoder that produces discretized
tokens for behavior trajectories $\tau = \{o_0, a_0, \dots\}$ and an imitation
learning policy decoder conditioned on these tokens. These additional behavior
tokens are added to the vocabulary of pretrained Multimodal Language
Models. With this encoder, we then pack long-term multimodal interactions
involving task instructions, memories, thoughts, observations, textual
responses, behavior trajectories, etc., into unified token sequences and model
them with autoregressive transformers. Thanks to the semantically meaningful
behavior tokens, the resulting VLA model, OmniJARVIS, can reason (by producing
chains of thought), plan, answer questions, and act (by producing behavior
tokens for the imitation learning policy decoder). OmniJARVIS demonstrates
excellent performance on a comprehensive collection of atomic, programmatic,
and open-ended tasks in open-world Minecraft. Our analysis further reveals the
crucial design principles in interaction data formation, unified tokenization,
and its scaling potential. The dataset, models, and code will be released at
https://craftjarvis.org/OmniJARVIS. |
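To make the unified-tokenization idea concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of how a VQ-style behavior encoder and sequence packing could look. The codebook size `K`, latent dimension `D`, the special tokens `<instr>`, `<thought>`, `<obs>`, `<act>`, `<BEH_i>`, and the toy word-split text tokenization are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of OmniJARVIS-style unified tokenization (assumptions, not the paper's code).
from dataclasses import dataclass
import numpy as np

K = 8          # assumed behavior codebook size (behavior token vocabulary)
D = 4          # assumed latent dimension of the behavior encoder

rng = np.random.default_rng(0)
codebook = rng.normal(size=(K, D))   # toy stand-in for a learned VQ codebook


def encode_behavior(trajectory: np.ndarray) -> list[int]:
    """Map a trajectory segment (T, D) of latent features to discrete behavior
    token ids by nearest-neighbour lookup in the codebook (VQ-style)."""
    dists = ((trajectory[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).tolist()


@dataclass
class InteractionChunk:
    instruction: str
    thought: str
    observation: str          # placeholder for image tokens in a real model
    behavior: np.ndarray      # (T, D) latent features of the trajectory segment


def pack_chunk(chunk: InteractionChunk) -> list[str]:
    """Pack one interaction chunk into a single token sequence: text fields are
    word-split here for illustration; the behavior segment is rendered as
    <BEH_i> tokens appended to the language model's vocabulary."""
    seq = ["<instr>"] + chunk.instruction.split()
    seq += ["<thought>"] + chunk.thought.split()
    seq += ["<obs>"] + chunk.observation.split()
    seq += ["<act>"] + [f"<BEH_{i}>" for i in encode_behavior(chunk.behavior)]
    return seq


if __name__ == "__main__":
    chunk = InteractionChunk(
        instruction="mine a log",
        thought="find the nearest tree and chop it",
        observation="img_tok_0 img_tok_1",
        behavior=rng.normal(size=(3, D)),   # toy trajectory latents
    )
    print(pack_chunk(chunk))
    # An autoregressive VLA would be trained on such packed sequences; at
    # inference, generated <BEH_i> tokens would condition the imitation-learning
    # policy decoder that produces low-level actions.
```

Packing all modalities into one token stream is what lets a single autoregressive transformer both reason in text and act through behavior tokens, which is the core design choice the abstract describes.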
DOI: | 10.48550/arxiv.2407.00114 |