META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI
Format: Article
Language: English
Abstract: Task-oriented dialogue (TOD) systems have been widely used by mobile phone intelligent assistants to accomplish tasks such as calendar scheduling or hotel reservation. Current TOD systems usually focus on multi-turn text/speech interaction and then call back-end APIs designed for TODs to perform the task. However, this API-based architecture greatly limits the information-searching capability of intelligent assistants and may even lead to task failure if TOD-specific APIs are not available or the task is too complicated to be executed by the provided APIs. In this paper, we propose a new TOD architecture: the GUI-based task-oriented dialogue system (GUI-TOD). A GUI-TOD system can directly perform GUI operations on real apps and execute tasks without invoking TOD-specific back-end APIs. Furthermore, we release META-GUI, a dataset for training a Multi-modal convErsaTional Agent on mobile GUI. We also propose a multi-modal action prediction and response model, which shows promising results on META-GUI. The dataset, code, and leaderboard are publicly available.
DOI: 10.48550/arxiv.2205.11029
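As a minimal sketch of the GUI-TOD idea described in the abstract (not the authors' actual model or the released code), the Python snippet below shows how an agent might fuse the dialogue history with the current screenshot to choose GUI actions directly, rather than calling TOD-specific back-end APIs. All names here (`GuiAgent`, `Action`, `run_turn`, and the `screen` device controller with its `capture`/`perform` methods) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of a GUI-based task-oriented dialogue (GUI-TOD) agent.
# Instead of invoking TOD-specific back-end APIs, the agent operates the
# app's GUI directly: at each step it looks at the dialogue so far and the
# current screen, then either performs a GUI action or replies to the user.

@dataclass
class Action:
    kind: str                      # e.g. "click", "input", "swipe", "end"
    target: Optional[str] = None   # identifier of a widget on the screen
    text: Optional[str] = None     # text to type, for "input" actions

class GuiAgent:
    def predict_action(self, dialogue: List[str], screenshot: bytes) -> Action:
        """Multi-modal action prediction: fuse the dialogue history and the
        current screenshot to decide the next GUI operation."""
        raise NotImplementedError  # model-specific

    def generate_response(self, dialogue: List[str], screenshot: bytes) -> str:
        """Produce the natural-language reply once the GUI steps are done."""
        raise NotImplementedError  # model-specific

def run_turn(agent: GuiAgent, dialogue: List[str], screen) -> str:
    """Execute GUI actions on a (hypothetical) device controller until the
    agent signals completion, then return a response to the user."""
    while True:
        action = agent.predict_action(dialogue, screen.capture())
        if action.kind == "end":   # no more GUI steps needed this turn
            return agent.generate_response(dialogue, screen.capture())
        screen.perform(action)     # tap / type / swipe on the real app
```

Splitting `predict_action` from `generate_response` loosely mirrors the abstract's framing of the task as multi-modal action prediction plus response generation; the loop ends when the model emits a terminating action and hands control back to the dialogue.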