ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents

Recent advances in integrating large language models (LLMs) with application programming interfaces (APIs) have attracted significant interest in both academia and industry. These API-based agents, leveraging the strong autonomy and planning capabilities of LLMs, can efficiently solve problems that require multi-step actions. However, their ability to handle multi-dimensional difficulty levels, diverse task types, and real-world demands through APIs remains unknown. In this paper, we introduce \textsc{ShortcutsBench}, a large-scale benchmark for the comprehensive evaluation of API-based agents in solving tasks with varying levels of difficulty, diverse task types, and real-world demands. \textsc{ShortcutsBench} includes a wealth of real APIs from Apple Inc.'s operating systems, refined user queries from shortcuts, human-annotated high-quality action sequences from shortcut developers, and accurate parameter-filling values covering primitive parameter types, enum parameter types, outputs from previous actions, and parameters that need to request necessary information from the system or user. Our extensive evaluation of agents built with 5 leading open-source (size >= 57B) and 4 closed-source LLMs (e.g., Gemini-1.5-Pro and GPT-3.5) reveals significant limitations in handling complex queries related to API selection, parameter filling, and requesting necessary information from systems and users. These findings highlight the challenges that API-based agents face in effectively fulfilling real and complex user queries. All datasets, code, and experimental results will be available at \url{https://github.com/eachsheep/shortcutsbench}.
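As a rough illustration of the kind of task such an API-based agent handles, the sketch below pairs a user query with a multi-step action sequence in which each step selects an API and fills its parameters. Every API name and field here is hypothetical, chosen only to mirror the parameter kinds the abstract mentions (primitive values, enums, outputs of previous actions, and information requested from the user), not ShortcutsBench's actual schema.

```python
# Hypothetical record: a user query plus a reference action sequence.
# API identifiers and field names are illustrative, not the benchmark's schema.
query = "Resize the photo I just took and text it to Alice"

action_sequence = [
    {
        "api": "is.workflow.actions.takephoto",           # API selection
        "params": {},
    },
    {
        "api": "is.workflow.actions.image.resize",
        "params": {
            "width": 800,                                 # primitive parameter
            "unit": "pixels",                             # enum parameter
            "image": {"from_action": 0},                  # output of a previous action
        },
    },
    {
        "api": "is.workflow.actions.sendmessage",
        "params": {
            "recipient": {"ask_user": "Which contact?"},  # request info from the user
            "attachment": {"from_action": 1},
        },
    },
]

# A grader could then compare an agent's predicted sequence against the
# reference, action by action and parameter by parameter.
def exact_match(predicted, reference):
    return predicted == reference

print(exact_match(action_sequence, action_sequence))  # True
```

A real evaluation would likely score API selection and parameter filling separately rather than requiring a whole-sequence exact match; the helper above is only the simplest possible comparison.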

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Shen, Haiyang; Li, Yue; Meng, Desong; Cai, Dongqi; Qi, Sheng; Zhang, Li; Xu, Mengwei; Ma, Yun
Format: Article
Language: English
Subjects:
Online Access: Order full text
DOI: 10.48550/arxiv.2407.00132
Date: 2024-06-28
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (open access)
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Software Engineering