LoFT: Local Proxy Fine-tuning For Improving Transferability Of Adversarial Attacks Against Large Language Model
Format: Article
Language: English
Abstract: It has been shown that Large Language Model (LLM) alignments can be circumvented by appending specially crafted attack suffixes to harmful queries in order to elicit harmful responses. To conduct attacks against private target models whose characteristics are unknown, public models can be used as proxies to fashion the attack, with successful attacks being transferred from the public proxies to the private target models. The success rate of such attacks depends on how closely the proxy model approximates the private model. We hypothesize that for attacks to be transferable, it is sufficient if the proxy can approximate the target model in the neighborhood of the harmful query. Therefore, in this paper we propose Local Fine-Tuning (LoFT), i.e., fine-tuning proxy models on similar queries that lie in the lexico-semantic neighborhood of harmful queries, in order to decrease the divergence between the proxy and target models. First, we demonstrate three approaches to prompting private target models to obtain similar queries given harmful queries. Next, we obtain data for local fine-tuning by eliciting responses from the target models for the generated similar queries. Then, we optimize attack suffixes to generate attack prompts and evaluate the impact of our local fine-tuning on the attack's success rate. Experiments show that local fine-tuning of proxy models improves attack transferability and increases the attack success rate by 39%, 7%, and 0.5% (absolute) on the target models ChatGPT, GPT-4, and Claude, respectively.
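The abstract outlines a pipeline of similar-query generation, response collection, local fine-tuning, and suffix optimization. Below is a minimal sketch of the first three steps, assuming the OpenAI Python client as the target-model interface and a Hugging Face causal LM as the proxy; the function names, the paraphrasing prompt, and the model identifiers are illustrative assumptions, and the paper's three query-generation approaches and its suffix-optimization step are not reproduced here.

```python
# Hypothetical sketch of LoFT's local fine-tuning step, not the paper's code.
# Model names, prompts, and helper names are illustrative assumptions.
import torch
from openai import OpenAI
from transformers import AutoModelForCausalLM, AutoTokenizer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def similar_queries(query: str, n: int = 5) -> list[str]:
    """Ask the target model for queries in the lexico-semantic neighborhood
    of `query` (a simplified stand-in for the paper's three approaches)."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # target model; placeholder choice
        messages=[{
            "role": "user",
            "content": f"Rewrite the following question {n} times, "
                       f"one per line, preserving its topic:\n{query}",
        }],
    )
    return resp.choices[0].message.content.strip().splitlines()

def target_response(query: str) -> str:
    """Elicit the target model's response to a similar query; these
    (query, response) pairs become the local fine-tuning data."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

def locally_fine_tune(proxy_name: str, pairs: list[tuple[str, str]]):
    """Standard supervised fine-tuning of an open proxy model on
    (similar query, target response) pairs, one example at a time."""
    tok = AutoTokenizer.from_pretrained(proxy_name)
    model = AutoModelForCausalLM.from_pretrained(proxy_name)
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for query, response in pairs:
        batch = tok(query + "\n" + response, return_tensors="pt",
                    truncation=True)
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
    return model

# Usage sketch: collect neighborhood data for each seed query, then
# fine-tune an open proxy model (identifier is a placeholder).
seeds = ["<seed query>"]
pairs = [(q, target_response(q)) for s in seeds for q in similar_queries(s)]
proxy = locally_fine_tune("meta-llama/Llama-2-7b-chat-hf", pairs)
```

The locally fine-tuned proxy would then be used to optimize attack suffixes, with the resulting prompts transferred to the private target model for evaluation.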
DOI: 10.48550/arxiv.2310.04445