ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning
Saved in:
| Main Authors: | , , , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Abstract: Human-AI interactivity is a critical aspect of the usability of multimodal large language models (MLLMs). However, existing end-to-end MLLMs only allow users to interact with them through language instructions, which limits interactive accuracy and efficiency. In this study, we present precise referring instructions that use diverse reference representations, such as points and boxes, as referring prompts to refer to specific regions. This enables MLLMs to focus on the region of interest and achieve finer-grained interaction. Based on precise referring instructions, we propose ChatSpot, a unified end-to-end multimodal large language model that supports diverse forms of interactivity, including mouse clicks, drag-and-drop, and drawn boxes, providing a more flexible and seamless interactive experience. We also construct a multi-grained vision-language instruction-following dataset from existing datasets and GPT-4 generation. Furthermore, we design a series of evaluation tasks to assess the effectiveness of region recognition and interaction. Experimental results showcase ChatSpot's promising performance.
DOI: 10.48550/arxiv.2307.09474
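This record does not specify how ChatSpot actually encodes its referring prompts, so the Python sketch below is purely illustrative: it shows one hypothetical way to pair a language instruction with a point (mouse click) or box (drawn rectangle) region. The class name, normalized coordinates, and `<point>`/`<box>` serialization are assumptions for illustration, not the paper's schema.

```python
# Hypothetical sketch (not the paper's actual format): representing a
# "precise referring instruction" that ties a language instruction to a
# specific image region given as a point or a bounding box.
from dataclasses import dataclass
from typing import Tuple, Union

# A point is (x, y); a box is (x1, y1, x2, y2); both use normalized [0, 1] coords.
Point = Tuple[float, float]
Box = Tuple[float, float, float, float]


@dataclass
class ReferringInstruction:
    """A language instruction paired with a referring prompt (region)."""
    instruction: str               # e.g. "Describe the object in this region."
    region: Union[Point, Box]      # click point or drawn box

    def to_prompt(self) -> str:
        """Serialize the region into the text prompt (illustrative tags only)."""
        if len(self.region) == 2:
            tag = f"<point>{self.region[0]:.3f},{self.region[1]:.3f}</point>"
        else:
            tag = f"<box>{','.join(f'{v:.3f}' for v in self.region)}</box>"
        return f"{self.instruction} {tag}"


if __name__ == "__main__":
    # A mouse click becomes a point prompt; a drawn rectangle becomes a box prompt.
    click = ReferringInstruction("What is the object here?", (0.42, 0.58))
    drag = ReferringInstruction("Describe this region.", (0.10, 0.20, 0.55, 0.70))
    print(click.to_prompt())
    print(drag.to_prompt())
```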