EcoAssistant: Using LLM Assistant More Affordably and Accurately
Today, users turn to Large Language Models (LLMs) as assistants to answer queries that require external knowledge; they ask about the weather in a specific city, about stock prices, and even about where specific locations are within their neighborhood. These queries require the LLM to produce code that...
Saved in:
Main authors: | Zhang, Jieyu ; Krishna, Ranjay ; Awadallah, Ahmed H ; Wang, Chi |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
creator | Zhang, Jieyu ; Krishna, Ranjay ; Awadallah, Ahmed H ; Wang, Chi |
description | Today, users turn to Large Language Models (LLMs) as assistants to answer
queries that require external knowledge; they ask about the weather in a specific
city, about stock prices, and even about where specific locations are within their
neighborhood. These queries require the LLM to produce code that invokes external
APIs to answer the user's question, yet LLMs rarely produce correct code on the
first try and require iterative code refinement based on execution results. In
addition, using LLM assistants to support high query volumes can be expensive. In
this work, we contribute a framework, EcoAssistant, that enables LLMs to answer
code-driven queries more affordably and accurately. EcoAssistant contains three
components. First, it allows the LLM assistants to converse with an automatic code
executor to iteratively refine code or to produce answers based on the execution
results. Second, we use a hierarchy of LLM assistants, which attempts to answer the
query with weaker, cheaper LLMs before backing off to stronger, more expensive
ones. Third, we retrieve solutions from past successful queries as in-context
demonstrations to help subsequent queries. Empirically, we show that EcoAssistant
offers distinct advantages in affordability and accuracy, surpassing GPT-4 by 10
points in success rate at less than 50% of GPT-4's cost. |
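The three components in the abstract can be sketched as a minimal Python loop. This is an illustrative sketch only, not the paper's actual implementation: the names `solve`, `ask_model`, `run_code`, and `mock_ask` are assumptions introduced here.

```python
# Minimal sketch of EcoAssistant's three components (illustrative only):
#   1. iterative code refinement against an automatic executor,
#   2. a cheap-to-expensive hierarchy of LLM assistants,
#   3. caching solved queries as in-context demonstrations.

def run_code(code):
    """Execute candidate code; return (success, answer-or-error)."""
    try:
        scope = {}
        exec(code, scope)                      # the automatic code executor
        return True, scope.get("answer")
    except Exception as exc:
        return False, str(exc)

def solve(query, models, ask_model, demos, max_turns=3):
    """Try models cheapest-first; let each refine its code a few turns."""
    for model in models:                       # component 2: assistant hierarchy
        feedback = None
        for _ in range(max_turns):             # component 1: iterative refinement
            code = ask_model(model, query, demos, feedback)
            ok, result = run_code(code)
            if ok:
                demos.append((query, code))    # component 3: save demonstration
                return model, result
            feedback = result                  # feed the error back to the LLM
    return None, None

# A mock assistant: the cheap model emits broken code, the strong one succeeds.
def mock_ask(model, query, demos, feedback):
    if model == "cheap":
        return "answer = undefined_name"       # raises NameError on execution
    return "answer = 21 * 2"
```

Calling `solve("6*7?", ["cheap", "strong"], mock_ask, demos=[])` exhausts the cheap model's refinement turns, escalates to the strong model, and returns `("strong", 42)` while recording one demonstration for future queries.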
doi_str_mv | 10.48550/arxiv.2310.03046 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2310.03046 |
language | eng |
recordid | cdi_arxiv_primary_2310_03046 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Software Engineering |
title | EcoAssistant: Using LLM Assistant More Affordably and Accurately |