Defense generator and method for preventing attack on AI unit, and computer readable storage medium


Bibliographic details
Main authors: ASEN, FELIX; KRECHMER, FRANK; GESSNER, FLORENCE FABIAN; HINZE, STEPHAN
Format: Patent
Language: Chinese; English
Description
Abstract: The present application relates to a defense generator (20) for dynamically generating at least one AI defense module (16). The core feature of the method is to determine the distribution function of the model data. The application is based on the hypothesis that model data belong to a model manifold or exhibit similar statistical behavior. It can therefore be determined whether data of an input dataset may be associated with an adversarial attack: for example, if statistical anomalies are found in the input dataset, its data may be associated with an adversarial attack.
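
As a rough illustration of the idea described in the abstract (a sketch, not the patented method), the Python example below estimates the distribution of clean model data and flags an input as a statistical anomaly, and thus as possibly adversarial, when its Mahalanobis distance from that distribution exceeds a threshold. All names, the Gaussian fit, and the threshold value are illustrative assumptions not taken from the application.

# Minimal sketch, assuming the defense reduces to anomaly detection against
# a reference distribution estimated on clean model data.
import numpy as np


def fit_reference_distribution(clean_data: np.ndarray):
    """Estimate mean and (pseudo-)inverse covariance of the clean model data."""
    mean = clean_data.mean(axis=0)
    cov = np.cov(clean_data, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return mean, cov_inv


def mahalanobis(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Distance of a single input from the reference distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))


def is_adversarial(x: np.ndarray, mean, cov_inv, threshold: float = 3.0) -> bool:
    """Flag the input as a possible adversarial sample if it is a
    statistical anomaly with respect to the clean model data."""
    return mahalanobis(x, mean, cov_inv) > threshold


# Usage: fit on clean data, then screen incoming inputs.
clean = np.random.default_rng(0).normal(size=(1000, 8))
mean, cov_inv = fit_reference_distribution(clean)
suspicious = clean[0] + 10.0  # strongly perturbed input
print(is_adversarial(suspicious, mean, cov_inv))  # True

A fixed threshold is only the simplest choice; in practice one would calibrate it against the distance distribution of held-out clean data.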