Treant: Training Evasion-Aware Decision Trees
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Despite its success and popularity, machine learning is now recognized as
vulnerable to evasion attacks, i.e., carefully crafted perturbations of test
inputs designed to force prediction errors. In this paper we focus on evasion
attacks against decision tree ensembles, which are among the most successful
predictive models for dealing with non-perceptual problems. Even though they
are powerful and interpretable, decision tree ensembles have received only
limited attention by the security and machine learning communities so far,
leading to a sub-optimal state of the art for adversarial learning techniques.
We thus propose Treant, a novel decision tree learning algorithm that, on the
basis of a formal threat model, minimizes an evasion-aware loss function at
each step of the tree construction. Treant is based on two key technical
ingredients: robust splitting and attack invariance, which jointly guarantee
the soundness of the learning process. Experimental results on three publicly
available datasets show that Treant is able to generate decision tree ensembles
that are at the same time accurate and nearly insensitive to evasion attacks,
outperforming state-of-the-art adversarial learning techniques. |
---|---|
DOI: | 10.48550/arxiv.1907.01197 |
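
The abstract above names "robust splitting" as one of Treant's key ingredients without spelling it out. The sketch below is only a rough illustration of that general idea, not the paper's actual algorithm or threat model: it scores a candidate split threshold under the worst-case placement of instances that an attacker with a hypothetical per-feature perturbation budget `eps` could push to either side of the split. The budget, the squared-error leaf loss, the brute-force enumeration of attacker choices, and all function names are assumptions made purely for illustration.

```python
# Minimal sketch of evasion-aware ("robust") split selection for one tree node.
# NOT the Treant algorithm: eps, the leaf loss, and the exhaustive enumeration
# of attacker choices are simplifying assumptions for illustration only.
import numpy as np


def leaf_loss(y):
    # Sum of squared errors w.r.t. the leaf's best constant prediction.
    if len(y) == 0:
        return 0.0
    return float(np.sum((y - y.mean()) ** 2))


def robust_split_loss(x, y, threshold, eps):
    # Instances whose feature value lies within eps of the threshold can be
    # pushed to either side by the attacker; score the split under the
    # worst-case assignment of those instances.
    safe_left = x <= threshold - eps
    safe_right = x > threshold + eps
    attackable = np.where(~(safe_left | safe_right))[0]

    worst = -np.inf
    for mask in range(1 << len(attackable)):
        go_left = safe_left.copy()
        go_right = safe_right.copy()
        for j, i in enumerate(attackable):
            if (mask >> j) & 1:
                go_left[i] = True
            else:
                go_right[i] = True
        loss = leaf_loss(y[go_left]) + leaf_loss(y[go_right])
        worst = max(worst, loss)
    return worst


def best_robust_threshold(x, y, eps):
    # Choose the threshold minimizing the worst-case (evasion-aware) loss.
    return min((robust_split_loss(x, y, t, eps), t) for t in np.unique(x))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 12)
    y = (x > 0.5).astype(float)
    # Returns (worst-case loss, threshold) for a toy one-feature dataset.
    print(best_robust_threshold(x, y, eps=0.05))
```

In contrast to a standard split search, which minimizes the loss of the induced partition as observed, this worst-case scoring discourages thresholds that sit close to many training points, since those are exactly the splits an attacker can flip with small perturbations.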