Optimizing Implementations of Non-Profiled Deep Learning-Based Side-Channel Attacks
Saved in:
Published in: | IEEE Access 2022, Vol.10, p.5957-5967 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | The differential deep learning analysis (DDLA) proposed by Timon is the first non-profiled side-channel attack technique that uses deep learning. The technique recovers the secret key by exploiting the behavior of deep learning metrics during training. However, the proposed technique makes it difficult to observe intermediate results, and the neural network must be retrained repeatedly, so the training cost grows with the key size. In this paper, we propose three methods that address these problems, as well as the challenges that arise from solving them. First, we propose a modified algorithm that allows the metrics to be monitored during the intermediate process. Second, we propose a parallel neural network architecture and algorithm that trains a single network, removing the need to retrain the same model repeatedly; attacks run faster with the proposed algorithm than with the previous one. Finally, we propose a novel architecture that uses shared layers to solve the memory problems of the parallel architecture while also achieving better performance. We validated our methods with non-profiled attacks on the benchmark database ASCAD and on a custom power-consumption dataset collected with a ChipWhisperer-Lite. On the ASCAD database, our shared-layers method was up to 134 times more efficient than the previous method. |
---|---|
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2022.3140446 |
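To illustrate the DDLA idea the abstract builds on, the sketch below simulates a non-profiled attack in miniature. It is not the paper's parallel or shared-layer architecture: as assumptions, a 4-bit PRESENT-style S-box stands in for AES, the leakage model is the S-box output value plus Gaussian noise, and a tiny logistic-regression "network" replaces a deep model. The core DDLA observation survives the simplification: one model is trained per key guess, and the training metric is best for the correct guess.

```python
import numpy as np

# Toy 4-bit S-box (PRESENT's), standing in for the AES S-box.
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])

def train_metric(X, y, lr=0.5, iters=300):
    """Train a logistic-regression 'network' by gradient descent and
    return its training accuracy -- the DDLA distinguishing metric."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))            # sigmoid
        w -= lr * Xb.T @ (p - y) / len(y)            # gradient step
    return np.mean((1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5) == y)

def ddla_attack(traces, plaintexts, n_keys=16):
    """Retrain one model per key guess -- the repeated-training cost
    that the paper's parallel architecture is designed to remove."""
    metrics = [train_metric(traces, (SBOX[plaintexts ^ k] >> 3) & 1)
               for k in range(n_keys)]
    return int(np.argmax(metrics)), metrics

# Simulated measurement campaign (all parameters illustrative).
rng = np.random.default_rng(1)
n, true_key = 3000, 0xB
pt = rng.integers(0, 16, n)
leak = SBOX[pt ^ true_key] + rng.normal(0, 0.5, n)   # the leaky sample
traces = np.column_stack([rng.normal(0, 1, n),       # pure-noise sample
                          (leak - leak.mean()) / leak.std(),
                          rng.normal(0, 1, n)])      # pure-noise sample

recovered, metrics = ddla_attack(traces, pt)
print(hex(recovered))  # should match true_key
```

Only the guess whose hypothetical labels are consistent with the leakage yields a learnable classification problem, so its training accuracy stands out; the paper's contributions target exactly the cost of the per-guess retraining loop in `ddla_attack`.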