On the Impossible Safety of Large AI Models
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance. However, they have been empirically found to pose serious security issues. This paper systematizes our knowledge about the fundamental impossibility of building arbitrarily accurate and secure machine learning models. More precisely, we identify key challenging features of many of today's machine learning settings. Namely, high accuracy seems to require memorizing large training datasets, which are often user-generated and highly heterogeneous, with both sensitive information and fake users. We then survey statistical lower bounds that, we argue, constitute a compelling case against the possibility of designing high-accuracy LAIMs with strong security guarantees.
DOI: 10.48550/arxiv.2209.15259
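For context, one representative example of the kind of statistical lower bound the abstract alludes to (an illustrative, classical result stated here as an assumption, not a theorem taken from this paper) is the packing-style privacy-accuracy tradeoff for pure differential privacy: any $\varepsilon$-differentially private mechanism that estimates the $d$ one-way marginals of a dataset of $n$ user records to within $\ell_\infty$-error $\alpha$ must satisfy

\[
\alpha \;\gtrsim\; \min\!\left\{ 1,\; \frac{d}{\varepsilon n} \right\}.
\]

In words, once the number of estimated quantities $d$ greatly exceeds $\varepsilon n$, high accuracy and strong privacy cannot hold simultaneously; this is the general shape of the accuracy-versus-security tension the abstract describes for high-dimensional (i.e., large) models.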