Domain Wall and Magnetic Tunnel Junction Hybrid for On-Chip Learning in UNet Architecture

Bibliographic Details
Authors: Vadde, Venkatesh; Muralidharan, Bhaskaran; Sharma, Abhishek
Format: Article
Language: English
Description
Abstract: We present a spintronic-device-based hardware implementation of UNet for segmentation tasks. Our approach involves designing hardware for the convolution, deconvolution, rectified linear activation (ReLU), and max pooling layers of the UNet architecture. We design the convolution and deconvolution layers of the network using the synaptic behavior of the domain wall MTJ. We also construct the ReLU and max pooling functions of the network using the spin-Hall-driven, orthogonal-current-injected MTJ. To incorporate the diverse physics of spin transport, magnetization dynamics, and CMOS elements in our UNet design, we employ a hybrid simulation setup that couples micromagnetic simulation, the non-equilibrium Green's function formalism, and SPICE simulation with the network implementation. We evaluate our UNet design on the CamVid dataset and achieve a segmentation accuracy of 83.71$\%$ on test data, on par with the software implementation, with 821 mJ of energy consumption for on-chip training over 150 epochs. We further demonstrate nearly an order of magnitude ($10\times$) improvement in the energy requirement of the network by using the unstable ferromagnet ($\Delta$=4.58) instead of the stable ferromagnet ($\Delta$=45) for the ReLU and max pooling functions, while maintaining similar accuracy. The hybrid architecture comprising the domain wall MTJ and the unstable-FM-based MTJ leads to an on-chip energy consumption of 85.79 mJ during training, with a testing energy cost of 1.55 $\mu$J.
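
To make the weight-to-device mapping described in the abstract concrete, the sketch below shows, in plain Python/NumPy, how a convolution kernel could be quantized onto differential pairs of domain-wall-MTJ conductances and evaluated as a crossbar-style current sum, with ReLU and 2x2 max pooling included as idealized stand-ins for the MTJ-based comparator blocks. All device parameters (G_MIN, G_MAX, N_STATES, v_read) and helper names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Hypothetical device parameters (not from the paper): a DW-MTJ synapse whose
# conductance varies linearly between G_MIN and G_MAX as the domain wall moves
# across N_STATES discrete positions along the free layer.
G_MIN, G_MAX = 2.0e-6, 10.0e-6   # siemens, assumed conductance range
N_STATES = 32                     # assumed number of resolvable domain-wall positions

def weight_to_conductance(w, w_max):
    """Map signed software weights onto differential DW-MTJ conductance pairs
    (G_plus, G_minus), quantized to N_STATES levels."""
    levels = np.round((np.abs(w) / w_max) * (N_STATES - 1)) / (N_STATES - 1)
    g = G_MIN + levels * (G_MAX - G_MIN)
    g_plus = np.where(w >= 0, g, G_MIN)
    g_minus = np.where(w < 0, g, G_MIN)
    return g_plus, g_minus

def mtj_conv2d(x, w, v_read=0.1):
    """Valid-mode 2D convolution computed as a crossbar-style weighted current
    sum: each input pixel is applied as a read voltage, each weight as a
    differential conductance pair."""
    kh, kw = w.shape
    g_plus, g_minus = weight_to_conductance(w, np.max(np.abs(w)) + 1e-12)
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + kh, j:j + kw] * v_read
            y[i, j] = np.sum(patch * (g_plus - g_minus))  # differential current
    return y

def mtj_relu(y):
    """ReLU, here idealized as max(0, y) in place of the MTJ comparator block."""
    return np.maximum(y, 0.0)

def mtj_maxpool2x2(y):
    """2x2 max pooling, an ideal stand-in for the MTJ-based comparator tree."""
    h, w = (y.shape[0] // 2) * 2, (y.shape[1] // 2) * 2
    y = y[:h, :w]
    return y.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((8, 8))
    kernel = rng.standard_normal((3, 3)) * 0.5
    feature = mtj_maxpool2x2(mtj_relu(mtj_conv2d(image, kernel)))
    print(feature.shape)  # (3, 3)
```

In a full UNet, such a conv/ReLU/pool block would be repeated along the encoder path and mirrored with deconvolution layers on the decoder path; the sketch only captures the device-level mapping of a single block.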
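
For context on the $\Delta$ values quoted above, the thermal stability factor of an MTJ free layer is commonly defined as the ratio of the energy barrier separating its two magnetization states to the thermal energy. A minimal statement of this standard definition (not taken from the paper) is:

$$\Delta = \frac{E_b}{k_B T} \approx \frac{K_{\mathrm{eff}}\, V}{k_B T},$$

where $E_b$ is the anisotropy energy barrier, $K_{\mathrm{eff}}$ the effective anisotropy energy density, $V$ the free-layer volume, $k_B$ the Boltzmann constant, and $T$ the temperature. A low-barrier free layer ($\Delta$=4.58) switches with far less energy than a stable one ($\Delta$=45), which is consistent with the roughly $10\times$ energy reduction reported for the ReLU and max pooling blocks.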
DOI:10.48550/arxiv.2403.02863