Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models

Large Language Models (LLMs) have demonstrated remarkable capabilities in generating coherent text but remain limited by the static nature of their training data. Retrieval Augmented Generation (RAG) addresses this issue by combining LLMs with up-to-date information retrieval, but also expand the at...

Ausführliche Beschreibung

Gespeichert in:
Bibliographische Detailangaben
Hauptverfasser: Clop, Cody, Teglia, Yannick
Format: Artikel
Sprache:eng
Schlagworte:
Online-Zugang:Volltext bestellen
Tags: Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Clop, Cody
Teglia, Yannick
description Large Language Models (LLMs) have demonstrated remarkable capabilities in generating coherent text but remain limited by the static nature of their training data. Retrieval Augmented Generation (RAG) addresses this issue by combining LLMs with up-to-date information retrieval, but also expand the attack surface of the system. This paper investigates prompt injection attacks on RAG, focusing on malicious objectives beyond misinformation, such as inserting harmful links, promoting unauthorized services, and initiating denial-of-service behaviors. We build upon existing corpus poisoning techniques and propose a novel backdoor attack aimed at the fine-tuning process of the dense retriever component. Our experiments reveal that corpus poisoning can achieve significant attack success rates through the injection of a small number of compromised documents into the retriever corpus. In contrast, backdoor attacks demonstrate even higher success rates but necessitate a more complex setup, as the victim must fine-tune the retriever using the attacker poisoned dataset.
doi_str_mv 10.48550/arxiv.2410.14479
format Article
fullrecord <record><control><sourceid>arxiv_GOX</sourceid><recordid>TN_cdi_arxiv_primary_2410_14479</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2410_14479</sourcerecordid><originalsourceid>FETCH-arxiv_primary_2410_144793</originalsourceid><addsrcrecordid>eNqFjssKwjAQRbNxIeoHuDI_YG01RV1W8QUKIu7L0E5LtEnKJC3698bi3s2dy3AuHMbGURiIVRyHM6CXbIO58I9IiOW6z9QGsmduDGHOb-hIYotkeWGIX8mo2vGTfmDmpNE8cc7Dlvv6Q6HiSVMq1M7PD6iRoCNNwc9AJfrUZQO-XEyOlR2yXgGVxdHvDthkv7tvj9NOLK1JKqB3-hVMO8HFf-IDX5dHag</addsrcrecordid><sourcetype>Open Access Repository</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype></control><display><type>article</type><title>Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models</title><source>arXiv.org</source><creator>Clop, Cody ; Teglia, Yannick</creator><creatorcontrib>Clop, Cody ; Teglia, Yannick</creatorcontrib><description>Large Language Models (LLMs) have demonstrated remarkable capabilities in generating coherent text but remain limited by the static nature of their training data. Retrieval Augmented Generation (RAG) addresses this issue by combining LLMs with up-to-date information retrieval, but also expand the attack surface of the system. This paper investigates prompt injection attacks on RAG, focusing on malicious objectives beyond misinformation, such as inserting harmful links, promoting unauthorized services, and initiating denial-of-service behaviors. We build upon existing corpus poisoning techniques and propose a novel backdoor attack aimed at the fine-tuning process of the dense retriever component. Our experiments reveal that corpus poisoning can achieve significant attack success rates through the injection of a small number of compromised documents into the retriever corpus. In contrast, backdoor attacks demonstrate even higher success rates but necessitate a more complex setup, as the victim must fine-tune the retriever using the attacker poisoned dataset.</description><identifier>DOI: 10.48550/arxiv.2410.14479</identifier><language>eng</language><subject>Computer Science - Cryptography and Security ; Computer Science - Learning</subject><creationdate>2024-10</creationdate><rights>http://creativecommons.org/licenses/by-nc-sa/4.0</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>228,230,776,881</link.rule.ids><linktorsrc>$$Uhttps://arxiv.org/abs/2410.14479$$EView_record_in_Cornell_University$$FView_record_in_$$GCornell_University$$Hfree_for_read</linktorsrc><backlink>$$Uhttps://doi.org/10.48550/arXiv.2410.14479$$DView paper in arXiv$$Hfree_for_read</backlink></links><search><creatorcontrib>Clop, Cody</creatorcontrib><creatorcontrib>Teglia, Yannick</creatorcontrib><title>Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models</title><description>Large Language Models (LLMs) have demonstrated remarkable capabilities in generating coherent text but remain limited by the static nature of their training data. Retrieval Augmented Generation (RAG) addresses this issue by combining LLMs with up-to-date information retrieval, but also expand the attack surface of the system. This paper investigates prompt injection attacks on RAG, focusing on malicious objectives beyond misinformation, such as inserting harmful links, promoting unauthorized services, and initiating denial-of-service behaviors. We build upon existing corpus poisoning techniques and propose a novel backdoor attack aimed at the fine-tuning process of the dense retriever component. Our experiments reveal that corpus poisoning can achieve significant attack success rates through the injection of a small number of compromised documents into the retriever corpus. In contrast, backdoor attacks demonstrate even higher success rates but necessitate a more complex setup, as the victim must fine-tune the retriever using the attacker poisoned dataset.</description><subject>Computer Science - Cryptography and Security</subject><subject>Computer Science - Learning</subject><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>GOX</sourceid><recordid>eNqFjssKwjAQRbNxIeoHuDI_YG01RV1W8QUKIu7L0E5LtEnKJC3698bi3s2dy3AuHMbGURiIVRyHM6CXbIO58I9IiOW6z9QGsmduDGHOb-hIYotkeWGIX8mo2vGTfmDmpNE8cc7Dlvv6Q6HiSVMq1M7PD6iRoCNNwc9AJfrUZQO-XEyOlR2yXgGVxdHvDthkv7tvj9NOLK1JKqB3-hVMO8HFf-IDX5dHag</recordid><startdate>20241018</startdate><enddate>20241018</enddate><creator>Clop, Cody</creator><creator>Teglia, Yannick</creator><scope>AKY</scope><scope>GOX</scope></search><sort><creationdate>20241018</creationdate><title>Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models</title><author>Clop, Cody ; Teglia, Yannick</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-arxiv_primary_2410_144793</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>Computer Science - Cryptography and Security</topic><topic>Computer Science - Learning</topic><toplevel>online_resources</toplevel><creatorcontrib>Clop, Cody</creatorcontrib><creatorcontrib>Teglia, Yannick</creatorcontrib><collection>arXiv Computer Science</collection><collection>arXiv.org</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Clop, Cody</au><au>Teglia, Yannick</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models</atitle><date>2024-10-18</date><risdate>2024</risdate><abstract>Large Language Models (LLMs) have demonstrated remarkable capabilities in generating coherent text but remain limited by the static nature of their training data. Retrieval Augmented Generation (RAG) addresses this issue by combining LLMs with up-to-date information retrieval, but also expand the attack surface of the system. This paper investigates prompt injection attacks on RAG, focusing on malicious objectives beyond misinformation, such as inserting harmful links, promoting unauthorized services, and initiating denial-of-service behaviors. We build upon existing corpus poisoning techniques and propose a novel backdoor attack aimed at the fine-tuning process of the dense retriever component. Our experiments reveal that corpus poisoning can achieve significant attack success rates through the injection of a small number of compromised documents into the retriever corpus. In contrast, backdoor attacks demonstrate even higher success rates but necessitate a more complex setup, as the victim must fine-tune the retriever using the attacker poisoned dataset.</abstract><doi>10.48550/arxiv.2410.14479</doi><oa>free_for_read</oa></addata></record>
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2410.14479
ispartof
issn
language eng
recordid cdi_arxiv_primary_2410_14479
source arXiv.org
subjects Computer Science - Cryptography and Security
Computer Science - Learning
title Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T06%3A46%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Backdoored%20Retrievers%20for%20Prompt%20Injection%20Attacks%20on%20Retrieval%20Augmented%20Generation%20of%20Large%20Language%20Models&rft.au=Clop,%20Cody&rft.date=2024-10-18&rft_id=info:doi/10.48550/arxiv.2410.14479&rft_dat=%3Carxiv_GOX%3E2410_14479%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true