Attacks on Third-Party APIs of Large Language Models

Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various third parties cannot be easily trusted. This paper pro...

Ausführliche Beschreibung

Gespeichert in:
Bibliographische Detailangaben
Hauptverfasser: Zhao, Wanru, Khazanchi, Vidit, Xing, Haodi, He, Xuanli, Xu, Qiongkai, Lane, Nicholas Donald
Format: Artikel
Sprache:eng
Schlagworte:
Online-Zugang:Volltext bestellen
Tags: Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Zhao, Wanru
Khazanchi, Vidit
Xing, Haodi
He, Xuanli
Xu, Qiongkai
Lane, Nicholas Donald
description Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various third parties cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward. Our code is released at https://github.com/vk0812/Third-Party-Attacks-on-LLMs.
doi_str_mv 10.48550/arxiv.2404.16891
format Article
fullrecord <record><control><sourceid>arxiv_GOX</sourceid><recordid>TN_cdi_arxiv_primary_2404_16891</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2404_16891</sourcerecordid><originalsourceid>FETCH-LOGICAL-a671-875c0d6d3137d1bc5074ca4c0e38f1cdf725e5ac9b8a842a7a02fbe13cccf90a3</originalsourceid><addsrcrecordid>eNotjssOgjAURLtxYdQPcCU_ALa0pWVJjK8Eowv25HLbKvGZgkb_XnxsZiZnMTmEjBmNhJaSTsE_60cUCyoiluiU9YnI2hbw2ATXS1Acam_CHfj2FWS7dcdckIPf2y4v-zt0Y3M19tQMSc_BqbGjfw9IsZgXs1WYb5frWZaHkCgWaiWRmsRwxpVhFUqqBIJAarl2DI1TsbQSMK00aBGDAhq7yjKOiC6lwAdk8rv9apc3X5_Bv8qPfvnV52-9MD7w</addsrcrecordid><sourcetype>Open Access Repository</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype></control><display><type>article</type><title>Attacks on Third-Party APIs of Large Language Models</title><source>arXiv.org</source><creator>Zhao, Wanru ; Khazanchi, Vidit ; Xing, Haodi ; He, Xuanli ; Xu, Qiongkai ; Lane, Nicholas Donald</creator><creatorcontrib>Zhao, Wanru ; Khazanchi, Vidit ; Xing, Haodi ; He, Xuanli ; Xu, Qiongkai ; Lane, Nicholas Donald</creatorcontrib><description>Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various third parties cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward. Our code is released at https://github.com/vk0812/Third-Party-Attacks-on-LLMs.</description><identifier>DOI: 10.48550/arxiv.2404.16891</identifier><language>eng</language><subject>Computer Science - Artificial Intelligence ; Computer Science - Computation and Language ; Computer Science - Computers and Society ; Computer Science - Cryptography and Security</subject><creationdate>2024-04</creationdate><rights>http://creativecommons.org/licenses/by/4.0</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>228,230,776,881</link.rule.ids><linktorsrc>$$Uhttps://arxiv.org/abs/2404.16891$$EView_record_in_Cornell_University$$FView_record_in_$$GCornell_University$$Hfree_for_read</linktorsrc><backlink>$$Uhttps://doi.org/10.48550/arXiv.2404.16891$$DView paper in arXiv$$Hfree_for_read</backlink></links><search><creatorcontrib>Zhao, Wanru</creatorcontrib><creatorcontrib>Khazanchi, Vidit</creatorcontrib><creatorcontrib>Xing, Haodi</creatorcontrib><creatorcontrib>He, Xuanli</creatorcontrib><creatorcontrib>Xu, Qiongkai</creatorcontrib><creatorcontrib>Lane, Nicholas Donald</creatorcontrib><title>Attacks on Third-Party APIs of Large Language Models</title><description>Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various third parties cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward. Our code is released at https://github.com/vk0812/Third-Party-Attacks-on-LLMs.</description><subject>Computer Science - Artificial Intelligence</subject><subject>Computer Science - Computation and Language</subject><subject>Computer Science - Computers and Society</subject><subject>Computer Science - Cryptography and Security</subject><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>GOX</sourceid><recordid>eNotjssOgjAURLtxYdQPcCU_ALa0pWVJjK8Eowv25HLbKvGZgkb_XnxsZiZnMTmEjBmNhJaSTsE_60cUCyoiluiU9YnI2hbw2ATXS1Acam_CHfj2FWS7dcdckIPf2y4v-zt0Y3M19tQMSc_BqbGjfw9IsZgXs1WYb5frWZaHkCgWaiWRmsRwxpVhFUqqBIJAarl2DI1TsbQSMK00aBGDAhq7yjKOiC6lwAdk8rv9apc3X5_Bv8qPfvnV52-9MD7w</recordid><startdate>20240424</startdate><enddate>20240424</enddate><creator>Zhao, Wanru</creator><creator>Khazanchi, Vidit</creator><creator>Xing, Haodi</creator><creator>He, Xuanli</creator><creator>Xu, Qiongkai</creator><creator>Lane, Nicholas Donald</creator><scope>AKY</scope><scope>GOX</scope></search><sort><creationdate>20240424</creationdate><title>Attacks on Third-Party APIs of Large Language Models</title><author>Zhao, Wanru ; Khazanchi, Vidit ; Xing, Haodi ; He, Xuanli ; Xu, Qiongkai ; Lane, Nicholas Donald</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-a671-875c0d6d3137d1bc5074ca4c0e38f1cdf725e5ac9b8a842a7a02fbe13cccf90a3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>Computer Science - Artificial Intelligence</topic><topic>Computer Science - Computation and Language</topic><topic>Computer Science - Computers and Society</topic><topic>Computer Science - Cryptography and Security</topic><toplevel>online_resources</toplevel><creatorcontrib>Zhao, Wanru</creatorcontrib><creatorcontrib>Khazanchi, Vidit</creatorcontrib><creatorcontrib>Xing, Haodi</creatorcontrib><creatorcontrib>He, Xuanli</creatorcontrib><creatorcontrib>Xu, Qiongkai</creatorcontrib><creatorcontrib>Lane, Nicholas Donald</creatorcontrib><collection>arXiv Computer Science</collection><collection>arXiv.org</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Zhao, Wanru</au><au>Khazanchi, Vidit</au><au>Xing, Haodi</au><au>He, Xuanli</au><au>Xu, Qiongkai</au><au>Lane, Nicholas Donald</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Attacks on Third-Party APIs of Large Language Models</atitle><date>2024-04-24</date><risdate>2024</risdate><abstract>Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs, but it also introduces risks, as these plugins developed by various third parties cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward. Our code is released at https://github.com/vk0812/Third-Party-Attacks-on-LLMs.</abstract><doi>10.48550/arxiv.2404.16891</doi><oa>free_for_read</oa></addata></record>
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2404.16891
ispartof
issn
language eng
recordid cdi_arxiv_primary_2404_16891
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Computers and Society
Computer Science - Cryptography and Security
title Attacks on Third-Party APIs of Large Language Models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-03T16%3A06%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Attacks%20on%20Third-Party%20APIs%20of%20Large%20Language%20Models&rft.au=Zhao,%20Wanru&rft.date=2024-04-24&rft_id=info:doi/10.48550/arxiv.2404.16891&rft_dat=%3Carxiv_GOX%3E2404_16891%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true