SYSTEMS AND METHODS FOR TRAINING SET OBFUSCATION UTILIZING AN INVERTED THREAT MODEL IN A ZERO-TRUST COMPUTING ENVIRONMENT



Bibliographic Details
Main Authors: Czeszynski, Alan Donald, Chalk, Mary Elizabeth, Rogers, Robert Derward
Format: Patent
Language: English
Description
Abstract: An algorithm is trained on a dataset to facilitate dynamic data-exfiltration protection in a zero-trust environment. An inversion threat model may also be generated using the original training dataset (a 'gold standard' inversion model). This inversion model can be characterized to determine its performance/accuracy in properly identifying whether a given input is within the original training dataset (a data exfiltration event). By using the inversion model to generate targeted noise (as opposed to Gaussian noise), this risk of data exfiltration can be reduced to a desired level without unduly impacting the algorithm's performance. Noise added to the original training dataset causes the inversion model to perform worse (meaning the data steward's data is more secure), but has a corresponding impact on the algorithm's accuracy and performance. By adding noise generated by the inversion model in an iterative manner and measuring the inversion model's performance, a balance can be reached where the data steward's data is considered secure while the negative impact on the algorithm's performance is minimized.
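
The iterative noise-addition loop described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example, not the patented implementation: it assumes synthetic data, a logistic-regression task model, and a simple confidence-based membership-inference proxy standing in for the 'gold standard' inversion model. Names such as membership_scores, inversion_auc, TARGET_AUC, and NOISE_STEP are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
D = 40
w_true = rng.normal(size=D)

def make_data(n=200):
    # Synthetic stand-in for the data steward's dataset.
    X = rng.normal(size=(n, D))
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_data()   # original training dataset
X_hold, y_hold = make_data()     # non-member records for the inversion model
X_test, y_test = make_data()     # held-out data for task accuracy

def membership_scores(model, X):
    # Confidence on the predicted class; members of the training set tend to
    # receive higher confidence, which the inversion model exploits.
    return model.predict_proba(X).max(axis=1)

def inversion_auc(model):
    # Proxy for the 'gold standard' inversion model: how well can training
    # membership be told apart from non-membership using model confidence?
    s_in = membership_scores(model, X_train)
    s_out = membership_scores(model, X_hold)
    labels = np.r_[np.ones(len(s_in)), np.zeros(len(s_out))]
    return roc_auc_score(labels, np.r_[s_in, s_out])

TARGET_AUC = 0.55   # inversion performance at which the data is deemed secure
NOISE_STEP = 0.05   # amount of targeted noise added per iteration

X_noisy = X_train.copy()
for iteration in range(50):
    model = LogisticRegression(max_iter=1000).fit(X_noisy, y_train)
    auc = inversion_auc(model)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"iter={iteration:2d}  inversion_auc={auc:.3f}  task_acc={acc:.3f}")
    if auc <= TARGET_AUC:
        break  # data steward's data considered secure; stop adding noise
    # Targeted (non-Gaussian) noise: perturb most strongly the records that
    # the inversion model currently identifies as members with high confidence.
    scores = membership_scores(model, X_noisy)
    weights = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    X_noisy = X_noisy + NOISE_STEP * weights[:, None] * rng.normal(size=X_noisy.shape)
```

The loop stops as soon as the inversion model's accuracy falls to the chosen target, so task accuracy is degraded only as far as needed to reach the desired security level, mirroring the balance described in the abstract.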