Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel
Format: Article
Language: English
Abstract: Creating high-quality data for training robust language-instructed agents is
a long-standing challenge in embodied AI. In this paper, we introduce a
Self-Refining Data Flywheel (SRDF) that generates high-quality and large-scale
navigational instruction-trajectory pairs by iteratively refining the data pool
through the collaboration between two models, the instruction generator and the
navigator, without any human-in-the-loop annotation. Specifically, SRDF starts
by using a base generator to create an initial data pool for training a base
navigator, followed by applying the trained navigator to filter the data pool.
This leads to higher-fidelity data for training a better generator, which can, in
turn, produce higher-quality data for training the next-round navigator. Such a
flywheel establishes a data self-refining process, yielding a continuously
improved and highly effective dataset for large-scale language-guided
navigation learning. Our experiments demonstrate that after several flywheel
rounds, the navigator elevates the performance boundary from 70% to 78% SPL on
the classic R2R test set, surpassing human performance (76%) for the first
time. Meanwhile, this process results in a superior generator, evidenced by a
SPICE increase from 23.5 to 26.2, better than all previous VLN instruction
generation methods. Finally, we demonstrate the scalability of our method
through increasing environment and instruction diversity, and the
generalization ability of our pre-trained navigator across various downstream
navigation tasks, surpassing state-of-the-art methods by a large margin in all
cases.
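The flywheel the abstract describes alternates between three steps per round: train a navigator on the current pool, use it to filter low-fidelity pairs, and retrain the generator on the filtered data to regenerate a better pool. A minimal toy sketch of that loop, where every function and quality score is an illustrative stand-in (the paper's actual components are a VLN navigator and an instruction generator, not scalar scores):

```python
import random

random.seed(0)

# Toy sketch of the Self-Refining Data Flywheel (SRDF) loop. A "model" here is
# just a quality score in [0, 1], and a pair's fidelity is a random draw biased
# by generator quality -- illustrative stand-ins, not the paper's models or API.

def generate_pool(gen_quality, n=100):
    """Generator creates instruction-trajectory pairs; a better generator
    yields pairs whose fidelity clusters higher."""
    return [(f"instr-{i}", f"traj-{i}",
             min(1.0, random.random() * 0.5 + gen_quality))
            for i in range(n)]

def train_on(pool):
    """A model trained on a pool inherits the pool's mean fidelity."""
    return sum(fidelity for _, _, fidelity in pool) / len(pool)

def flywheel(rounds=3, threshold=0.6):
    gen_quality = 0.4                  # base instruction generator
    pool = generate_pool(gen_quality)  # initial data pool
    nav_quality = 0.0
    for _ in range(rounds):
        nav_quality = train_on(pool)                   # train navigator on pool
        pool = [p for p in pool if p[2] >= threshold]  # navigator filters pool
        gen_quality = train_on(pool)                   # retrain generator
        pool = generate_pool(gen_quality)              # regenerate better pairs
    return nav_quality, gen_quality

nav, gen = flywheel()
```

Each round, filtering raises the mean fidelity of the training data, so the retrained generator produces a better pool for the next navigator, mirroring the self-refining dynamic the abstract reports.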
DOI: 10.48550/arxiv.2412.08467