LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching

As Large Language Models (LLMs) broaden their capabilities to manage thousands of API calls, they are confronted with complex data operations across vast datasets with significant overhead to the underlying system. In this work, we introduce LLM-dCache to optimize data accesses by treating cache operations as callable API functions exposed to the tool-augmented agent. We grant LLMs the autonomy to manage cache decisions via prompting, seamlessly integrating with existing function-calling mechanisms. Tested on an industry-scale massively parallel platform that spans hundreds of GPT endpoints and terabytes of imagery, our method improves Copilot times by an average of 1.24x across various LLMs and prompting techniques.
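
The abstract describes exposing cache operations to the agent as ordinary callable tools. The Python sketch below illustrates what such an interface could look like: cache reads and updates sit alongside an expensive data-loading call in the tool list handed to a function-calling LLM. All names (cache_read, cache_update, load_remote_tiles), the schemas, and the in-memory cache are illustrative assumptions made for this sketch, not the authors' implementation.

```python
"""Minimal sketch, assuming cache operations are exposed as callable tools.
Tool names, schemas, and the toy dataset are illustrative, not the paper's."""

import json
import time

# Toy in-memory store standing in for the localized data cache.
_CACHE: dict[str, object] = {}

def load_remote_tiles(region: str) -> dict:
    """Hypothetical expensive data operation (e.g., fetching imagery tiles)."""
    time.sleep(0.1)  # simulate I/O latency
    return {"region": region, "tiles": [f"{region}_tile_{i}" for i in range(3)]}

def cache_read(key: str) -> dict:
    """Cache lookup exposed to the agent as a callable tool."""
    return {"hit": key in _CACHE, "value": _CACHE.get(key)}

def cache_update(key: str, value: object) -> dict:
    """Cache insertion exposed to the agent as a callable tool."""
    _CACHE[key] = value
    return {"stored": True, "key": key}

# Tool schemas in the style passed to a function-calling LLM; the model is
# prompted to try cache_read before issuing the expensive data call.
TOOLS = [
    {"name": "cache_read", "parameters": {"key": "string"}},
    {"name": "cache_update", "parameters": {"key": "string", "value": "object"}},
    {"name": "load_remote_tiles", "parameters": {"region": "string"}},
]

DISPATCH = {"cache_read": cache_read, "cache_update": cache_update,
            "load_remote_tiles": load_remote_tiles}

def run_tool(call: dict) -> str:
    """Execute a tool call emitted by the model and return a JSON result string."""
    result = DISPATCH[call["name"]](**call["arguments"])
    return json.dumps(result)

if __name__ == "__main__":
    # Simulated call sequence an LLM might emit for two identical queries:
    # miss -> load -> update on the first query, hit on the second.
    print(run_tool({"name": "cache_read", "arguments": {"key": "region:alpha"}}))
    data = load_remote_tiles("alpha")
    print(run_tool({"name": "cache_update",
                    "arguments": {"key": "region:alpha", "value": data}}))
    print(run_tool({"name": "cache_read", "arguments": {"key": "region:alpha"}}))
```

In this setup the model is prompted to call cache_read before issuing the costly load and to call cache_update after a miss, mirroring the prompt-driven cache management the abstract describes.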

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2024-09
Main authors: Singh, Simranjit; Fore, Michael; Karatzas, Andreas; Lee, Chaehong; Jian, Yanan; Shangguan, Longfei; Yu, Fuxun; Anagnostopoulos, Iraklis; Stamoulis, Dimitrios
Format: Article
Language: English
Subjects: Application programming interface; Large language models
Online access: Full text
container_title arXiv.org
creator Singh, Simranjit
Fore, Michael
Karatzas, Andreas
Lee, Chaehong
Jian, Yanan
Shangguan, Longfei
Yu, Fuxun
Anagnostopoulos, Iraklis
Stamoulis, Dimitrios
description As Large Language Models (LLMs) broaden their capabilities to manage thousands of API calls, they are confronted with complex data operations across vast datasets with significant overhead to the underlying system. In this work, we introduce LLM-dCache to optimize data accesses by treating cache operations as callable API functions exposed to the tool-augmented agent. We grant LLMs the autonomy to manage cache decisions via prompting, seamlessly integrating with existing function-calling mechanisms. Tested on an industry-scale massively parallel platform that spans hundreds of GPT endpoints and terabytes of imagery, our method improves Copilot times by an average of 1.24x across various LLMs and prompting techniques.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-09
issn 2331-8422
language eng
recordid cdi_proquest_journals_3067020607
source Free E-Journals
subjects Application programming interface
Large language models
title LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T04%3A21%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=LLM-dCache:%20Improving%20Tool-Augmented%20LLMs%20with%20GPT-Driven%20Localized%20Data%20Caching&rft.jtitle=arXiv.org&rft.au=Singh,%20Simranjit&rft.date=2024-09-21&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3067020607%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3067020607&rft_id=info:pmid/&rfr_iscdi=true