GameVLM: A Decision-making Framework for Robotic Task Planning Based on Visual Language Models and Zero-sum Games
Format: Article
Language: English
Abstract: With their prominent scene understanding and reasoning capabilities,
pre-trained visual-language models (VLMs) such as GPT-4V have attracted
increasing attention in robotic task planning. Compared with traditional task
planning strategies, VLMs are strong at multimodal information parsing and code
generation and show remarkable efficiency. Although VLMs demonstrate great
potential in robotic task planning, they suffer from challenges such as
hallucination, semantic complexity, and limited context. To handle these
issues, this paper proposes a multi-agent framework, GameVLM, to enhance
decision-making in robotic task planning. In this study, VLM-based decision
and expert agents are introduced to carry out task planning: decision agents
generate candidate task plans, and the expert agent evaluates them. Zero-sum
game theory is applied to resolve inconsistencies among the agents and
determine the optimal solution. Experimental results on real robots
demonstrate the efficacy of the proposed framework, with an average success
rate of 83.3%.
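The abstract describes selecting among competing plans via a zero-sum game between decision agents (who propose plans) and an expert agent (who evaluates them). A minimal sketch of one standard zero-sum selection rule, maximin over a payoff matrix, is shown below; the payoff values, the criteria names, and the maximin rule itself are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: decision agents are the row player (each row is a
# candidate plan), the expert agent is the column player (each column is an
# adversarial evaluation criterion). In a zero-sum setting the plan whose
# worst-case payoff is highest is the maximin choice.

def maximin_plan(payoffs: list[list[float]]) -> int:
    """Return the index of the plan with the best worst-case payoff."""
    best_idx, best_worst = 0, float("-inf")
    for i, row in enumerate(payoffs):
        worst = min(row)  # expert picks the harshest criterion for this plan
        if worst > best_worst:
            best_idx, best_worst = i, worst
    return best_idx

# Rows: candidate plans from decision agents.
# Columns: hypothetical expert criteria (feasibility, safety, efficiency).
payoffs = [
    [0.9, 0.4, 0.8],  # plan 0: efficient but risky (worst case 0.4)
    [0.7, 0.7, 0.6],  # plan 1: balanced (worst case 0.6)
    [0.5, 0.9, 0.5],  # plan 2: safe but slow (worst case 0.5)
]
print(maximin_plan(payoffs))  # → 1
```

Under this rule, plan 1 wins because its minimum payoff (0.6) exceeds the other plans' worst cases, which matches the intuition of choosing the plan most robust to the expert's criticism.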
DOI: 10.48550/arxiv.2405.13751