High Dimensional Spaces, Deep Learning and Adversarial Examples
Saved in:
Main Author: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order Full Text |
Summary: | In this paper, we analyze deep learning from a mathematical point of
view and derive several novel results. The results are based on intriguing
mathematical properties of high dimensional spaces. We first look at
perturbation-based adversarial examples and show how they can be understood
using topological and geometrical arguments in high dimensions. We point out a
mistake in an argument presented in prior published literature, and we present
a more rigorous, general and correct mathematical result to explain
adversarial examples in terms of the topology of image manifolds. Second, we
look at optimization landscapes of deep neural networks and examine the number
of saddle points relative to that of local minima. Third, we show how the
multiresolution nature of images explains perturbation-based adversarial
examples in the form of a stronger result. Our results state that the
expectation of the $L_2$-norm of adversarial perturbations is
$O\left(\frac{1}{\sqrt{n}}\right)$ and therefore shrinks to 0 as the image
resolution $n$ becomes arbitrarily large. Finally, by incorporating the
parts-whole manifold learning hypothesis for natural images, we investigate
the workings of deep neural networks and the root causes of adversarial
examples, and we discuss how future improvements can be made and how
adversarial examples can be eliminated. |
---|---|
DOI: | 10.48550/arxiv.1801.00634 |
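
The abstract's headline rate admits a simple back-of-the-envelope check. The sketch below is not the paper's topological argument; it uses the standard linear-classifier picture of adversarial examples: for $f(x) = w \cdot x$, the smallest $L_2$ perturbation moving $x$ across the decision boundary $f = 0$ has norm $|f(x)| / \lVert w \rVert_2$. Assuming unit-norm inputs and per-pixel weights of typical size $O(1)$ (both illustrative assumptions, not taken from the paper), the margin $|f(x)|$ stays $O(1)$ while $\lVert w \rVert_2$ grows like $\sqrt{n}$, so the minimal perturbation shrinks like $1/\sqrt{n}$, consistent with the abstract's $O\left(\frac{1}{\sqrt{n}}\right)$ claim.

```python
import numpy as np

# Hedged numerical sketch of the 1/sqrt(n) scaling under a linear model.
# Assumptions (not from the paper): iid standard-normal weights w, and
# "images" x normalized to unit L2 norm. Then f(x) = w . x is O(1) in n,
# while ||w||_2 grows like sqrt(n), so the exact minimal L2 perturbation
# |f(x)| / ||w||_2 shrinks like 1/sqrt(n) as resolution n grows.
rng = np.random.default_rng(0)

for n in [64, 256, 1024, 4096, 16384]:
    w = rng.normal(size=n)                 # per-pixel weights, ||w||_2 ~ sqrt(n)
    x = rng.uniform(0.0, 1.0, size=n)
    x /= np.linalg.norm(x)                 # normalize the "image" to unit norm
    margin = abs(w @ x)                    # classifier margin, O(1) in n
    min_pert = margin / np.linalg.norm(w)  # minimal L2 perturbation to flip f
    print(f"n={n:6d}  minimal L2 perturbation={min_pert:.5f}"
          f"  1/sqrt(n)={1 / np.sqrt(n):.5f}")
```

Running the loop shows the minimal perturbation tracking the $1/\sqrt{n}$ reference column as $n$ grows, which is the shrinking-perturbation behavior the abstract states for increasing image resolution.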