AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Autonomous agents have become increasingly important for interacting with the real world, and Android agents in particular have recently become a frequently discussed interaction method. However, existing studies on training and evaluating Android agents lack systematic research covering both open-source and closed-source models. In this work, we propose AndroidLab as a systematic Android agent framework. It includes an operation environment with different modalities, an action space, and a reproducible benchmark, and it supports both large language models (LLMs) and large multimodal models (LMMs) in the same action space. The AndroidLab benchmark includes predefined Android virtual devices and 138 tasks across nine apps built on these devices. Using the AndroidLab environment, we develop an Android Instruction dataset and train six open-source LLMs and LMMs, lifting the average success rate from 4.59% to 21.50% for LLMs and from 1.93% to 13.28% for LMMs. AndroidLab is open-sourced and publicly available at https://github.com/THUDM/Android-Lab.
DOI: 10.48550/arxiv.2410.24024
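
The summary describes a single operation environment whose action space is shared by text-only LLM agents and multimodal LMM agents. Below is a minimal, hypothetical Python sketch of that idea; every name in it (ToyAndroidEnv, Tap, TypeText, DummyTextAgent, observe, step) is an illustrative assumption, not the actual API of the linked AndroidLab repository.

```python
# Hypothetical sketch, NOT the real AndroidLab API: one shared action space
# that both a text-only (LLM) agent and a multimodal (LMM) agent can emit
# actions into, with the environment deciding which observation to serve.
from dataclasses import dataclass
from typing import Union


@dataclass
class Tap:
    """Tap a screen coordinate, normalized to [0, 1]."""
    x: float
    y: float


@dataclass
class TypeText:
    """Type text into the currently focused input field."""
    text: str


Action = Union[Tap, TypeText]


class ToyAndroidEnv:
    """Toy stand-in for an Android virtual-device environment.

    A real environment would drive an emulator; this one only records the
    actions it receives, to show how a single action space can serve both
    observation modalities.
    """

    def __init__(self, modality: str = "text") -> None:
        assert modality in ("text", "image")
        self.modality = modality
        self.history: list[Action] = []

    def observe(self) -> Union[str, bytes]:
        # A text-mode agent would see a UI/accessibility-tree dump, while a
        # multimodal agent would see a screenshot; both are placeholders here.
        if self.modality == "text":
            return "<hierarchy><node text='Settings' clickable='true'/></hierarchy>"
        return b"<placeholder screenshot bytes>"

    def step(self, action: Action) -> None:
        self.history.append(action)


class DummyTextAgent:
    """Stands in for an LLM-backed agent that reads the UI tree as text."""

    def act(self, observation: Union[str, bytes]) -> Action:
        # A real agent would prompt a model with the observation and parse
        # its reply into an Action; here we always tap near the top center.
        return Tap(x=0.5, y=0.1)


def run_episode(env: ToyAndroidEnv, agent: DummyTextAgent, steps: int = 3) -> None:
    """Run a short fixed-length episode with any agent exposing act()."""
    for _ in range(steps):
        env.step(agent.act(env.observe()))


if __name__ == "__main__":
    env = ToyAndroidEnv(modality="text")
    run_episode(env, DummyTextAgent())
    print(env.history)  # e.g. [Tap(x=0.5, y=0.1), Tap(x=0.5, y=0.1), Tap(x=0.5, y=0.1)]
```

Keeping actions as plain data objects is what lets the same control loop serve either modality; only observe() differs between the text and image settings, which mirrors the abstract's claim of supporting LLMs and LMMs in the same action space.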