CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering
Published in: Information (Basel) 2023-12, Vol. 14 (12), p. 656
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Reverse engineering (RE) of Integrated Circuits (ICs) is a process in which one attempts to extract the internals of an IC, recover the circuit structure, and determine the gate-level information of the design. In general, the RE process can be carried out for validation as well as for Intellectual Property (IP) theft. In addition, RE facilitates illicit activities such as inserting hardware Trojans, pirating or counterfeiting a design, or developing an attack. In this work, we propose an approach that introduces cognitive perturbations, with the aid of adversarial machine learning, into the IC layout to prevent the RE process from succeeding. We first construct a layer-by-layer image dataset of a 45 nm predictive technology. With this dataset, we propose a convolutional neural network model called RecoG-Net to recognize the logic gates, which is the first step in RE. RecoG-Net recognizes the gates with more than 99.7% accuracy. Our thwarting approach uses adversarial attack generation algorithms to generate perturbations. Unlike traditional adversarial attacks in machine learning, the perturbation generation must be highly constrained to meet fabrication rules such as Design Rule Checking (DRC) and Layout vs. Schematic (LVS) checks. Hence, we propose CAPTIVE as a constrained perturbation generator that satisfies DRC. The experiments show that the accuracy of reverse engineering using machine learning techniques decreases from 100% to approximately 30%, depending on the adversarial generator. (An illustrative sketch of a constrained perturbation step follows this record.)
ISSN: 2078-2489
DOI: 10.3390/info14120656
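
The abstract describes generating adversarial perturbations to a layout image that are constrained so the modified layout still satisfies fabrication rules. The sketch below illustrates only the general idea with a masked, single-step FGSM-style perturbation against a small CNN gate classifier. The model `TinyGateNet`, the editable-region mask, and all shapes and parameters are hypothetical stand-ins, not the paper's RecoG-Net or the CAPTIVE algorithm, and a binary pixel mask is only a crude proxy for real DRC/LVS constraints.

```python
# Minimal illustrative sketch, not the authors' method: a masked FGSM-style
# perturbation on a layout image, standing in for CAPTIVE's DRC-constrained
# perturbation generation. TinyGateNet and all shapes/parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGateNet(nn.Module):
    """Small CNN gate classifier used here as a stand-in for RecoG-Net."""

    def __init__(self, num_gate_types: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_gate_types)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def constrained_fgsm(model, image, label, mask, eps=0.1):
    """One FGSM step restricted to regions where `mask` == 1.

    The mask is a crude proxy for layout constraints: pixels outside it
    (e.g. where edits would plausibly violate spacing rules) stay untouched.
    A real flow would additionally run DRC/LVS on the perturbed layout.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbation = eps * image.grad.sign() * mask
    return (image + perturbation).clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = TinyGateNet()
    layout = torch.rand(1, 1, 64, 64)                      # dummy layer image
    gate_label = torch.tensor([3])                         # dummy gate class
    edit_mask = (torch.rand(1, 1, 64, 64) > 0.7).float()   # dummy editable region
    adv_layout = constrained_fgsm(model, layout, gate_label, edit_mask)
    print("prediction before:", model(layout).argmax(1).item(),
          "after:", model(adv_layout).argmax(1).item())
```

The masking step is the key difference from an unconstrained adversarial attack: the gradient-sign perturbation is zeroed outside the editable region before it is applied, so the modification budget is spent only where layout changes are assumed to be permissible.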