The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
Format: Article
Language: English
Online access: Order full text
Summary: The concept of learned index structures relies on the idea that the
input-output functionality of a database index can be viewed as a prediction
task and, thus, be implemented using a machine learning model instead of
traditional algorithmic techniques. This novel angle for a decades-old problem
has inspired numerous exciting results at the intersection of machine learning
and data structures. However, the main advantage of learned index structures,
i.e., the ability to adjust to the data at hand via the underlying ML model,
can become a disadvantage from a security perspective, as it could be exploited.
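To make the core idea concrete, here is a minimal sketch of such an index, assuming a single least-squares linear model fitted to the keys' empirical CDF (the function names and the error-bounded lookup are illustrative assumptions, not the paper's exact construction):

```python
# Minimal sketch (illustrative): a linear regression fitted to
# (key, rank) pairs -- i.e., the empirical CDF of the key set --
# predicts where a key sits in the sorted array.
import numpy as np

def fit_linear_index(keys):
    keys = np.sort(np.asarray(keys, dtype=float))
    ranks = np.arange(len(keys))            # rank i is N * F(key_i), up to scaling
    a, b = np.polyfit(keys, ranks, deg=1)   # least-squares linear fit
    residuals = ranks - (a * keys + b)
    # Worst-case under-/overshoot lets lookups scan a guaranteed window.
    return keys, a, b, int(np.floor(residuals.min())), int(np.ceil(residuals.max()))

def lookup(key, keys, a, b, err_lo, err_hi):
    guess = int(round(a * key + b))
    lo = max(0, guess + err_lo - 1)         # widened by one to absorb rounding
    hi = min(len(keys), guess + err_hi + 2)
    return lo + int(np.searchsorted(keys[lo:hi], key))
```

The residual window is what the data-dependence buys: the better the model fits the CDF, the narrower the window and the faster the lookup.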
In this work, we present the first study of poisoning attacks on learned
index structures. The required poisoning approach differs from all
previous work, since the model under attack is trained on a cumulative
distribution function (CDF) and thus every injection into the training set
has a cascading impact on multiple data values. We formulate the first poisoning
attacks on linear regression models trained on the CDF, which is a basic
building block of the proposed learned index structures. We generalize our
poisoning techniques to attack a more advanced two-stage design of learned
index structures, the recursive model index (RMI), which has been shown to
outperform traditional B-Trees. We evaluate our attacks on real-world and
synthetic datasets under a wide variety of parameterizations of the model and
show that the error of the RMI increases by up to $300\times$ and the error of
its second-stage models by up to $3000\times$.
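As a toy illustration of the cascading effect described above (synthetic uniform keys and a naive injection at one end of the key space; the paper's attacks place injections adversarially rather than naively), every poisoning key inserted below a legitimate key shifts that key's rank, i.e., its CDF training target:

```python
# Toy illustration (not the paper's attack algorithm): keys injected at
# the low end shift the empirical-CDF rank of every legitimate key above
# them, so each injection perturbs many training targets at once.
import numpy as np

rng = np.random.default_rng(0)
legit = np.sort(rng.uniform(0.0, 1000.0, size=100))
poison = np.linspace(0.0, 0.01, 20)         # 20 keys crammed at the low end
poisoned = np.sort(np.concatenate([legit, poison]))

def mean_abs_position_error(keys):
    ranks = np.arange(len(keys))
    a, b = np.polyfit(keys, ranks, deg=1)
    return np.abs(ranks - (a * keys + b)).mean()

print("clean fit error:   ", mean_abs_position_error(legit))
print("poisoned fit error:", mean_abs_position_error(poisoned))
```

In the paper's formulation, the attacker optimizes such injection points, first against a single linear regression trained on the CDF and then against the second-stage models of an RMI.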
DOI: 10.48550/arxiv.2008.00297