Realizing Linear Synaptic Plasticity in Electric Double Layer-Gated Transistors for Improved Predictive Accuracy and Efficiency in Neuromorphic Computing
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Neuromorphic computing offers a low-power, parallel alternative to traditional von Neumann architectures by addressing their sequential data processing bottlenecks. Electric double layer-gated transistors (EDLTs) resemble biological synapses in their ionic response and offer low-power operation, making them suitable for neuromorphic applications. A critical consideration for artificial neural networks (ANNs) is achieving linear and symmetric plasticity (weight updates) during training, as this directly affects accuracy and efficiency. This study uses finite element modeling to explore EDLTs as artificial synapses in ANNs and investigates the underlying mechanisms behind the nonlinear plasticity observed experimentally in previous studies. By solving modified Poisson-Nernst-Planck (mPNP) equations, we examined ion dynamics within an EDL capacitor and their effects on plasticity, revealing that the rates of EDL formation and dissipation are concentration-dependent. Fixed-magnitude pulse inputs result in decreased formation and increased dissipation rates, leading to nonlinear weight updates that limit the number of accessible states and the operating range of the devices. To address this, we developed a predictive linear ionic weight update solver (LIWUS) in Python to predict the voltage pulse inputs that achieve linear plasticity. We then evaluated an ANN with linear and nonlinear weight updates on the MNIST classification task. The LIWUS-provided linear weight updates required 19% fewer epochs in training and validation than the network with nonlinear weight updates to reach optimal performance. It achieved 97.6% recognition accuracy, 1.5-4.2% higher than with nonlinear updates, with a low standard deviation of 0.02%. The network model is amenable to future spiking neural network applications, and the performance improvement from linear weight updates is expected to increase for more complex networks.
DOI: 10.48550/arxiv.2410.08978
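
The abstract refers to modified Poisson-Nernst-Planck (mPNP) equations governing ion dynamics in the EDL capacitor. The record does not state which modification the authors use; a common choice is the steric (finite-ion-size) correction of Kilic, Bazant, and Ajdari, shown below as an assumed reference form only:

```latex
% Assumed steric-modified PNP system (Bikerman-type); the paper's exact
% modification is not specified in this record.
% Poisson equation for the electrostatic potential \phi:
-\nabla \cdot \left( \epsilon \nabla \phi \right) = z e \left( c_+ - c_- \right)
% Modified Nernst-Planck equations for the ion concentrations c_\pm,
% with a steric term using the ion volume \nu = a^3:
\frac{\partial c_\pm}{\partial t} = \nabla \cdot D \left[ \nabla c_\pm
  \pm \frac{z e}{k_B T} \, c_\pm \nabla \phi
  + \frac{\nu \, c_\pm \nabla (c_+ + c_-)}{1 - \nu (c_+ + c_-)} \right]
```

The steric term suppresses unphysical ion crowding at the electrode, which is consistent with the abstract's finding that EDL formation and dissipation rates depend on local ion concentration.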
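LIWUS is described as predicting the voltage pulse inputs that yield linear plasticity, i.e., a fixed conductance increment per pulse. A minimal sketch of that idea, assuming a hypothetical monotonic device response `delta_g(v, g)` (not from the paper) that is inverted by bisection for each pulse:

```python
import numpy as np

def delta_g(v, g, g_max=1.0, alpha=0.1):
    """Hypothetical device response: conductance change produced by one
    pulse of amplitude v at current conductance g. Saturation toward
    g_max is what makes fixed-amplitude pulses nonlinear."""
    return alpha * v * (1.0 - g / g_max)

def liwus_pulse_train(g0, g_target, n_pulses, v_lo=0.0, v_hi=5.0, tol=1e-9):
    """Return pulse amplitudes that step conductance from g0 to g_target
    in n_pulses equal increments, assuming each increment is reachable
    within [v_lo, v_hi]."""
    step = (g_target - g0) / n_pulses   # desired fixed increment per pulse
    g, pulses = g0, []
    for _ in range(n_pulses):
        lo, hi = v_lo, v_hi
        while hi - lo > tol:            # bisect for v with delta_g(v, g) == step
            mid = 0.5 * (lo + hi)
            if delta_g(mid, g) < step:
                lo = mid
            else:
                hi = mid
        v = 0.5 * (lo + hi)
        pulses.append(v)
        g += delta_g(v, g)              # apply the pulse to the model state
    return np.array(pulses)

pulses = liwus_pulse_train(g0=0.1, g_target=0.9, n_pulses=32)
print(pulses[:5])  # amplitudes grow as the device saturates
```

The actual solver presumably inverts the mPNP-derived response rather than a closed-form model, but the control loop, choosing each pulse so the realized weight change stays constant, is the same idea.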
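The reported accuracy and epoch-count gap comes from comparing linear against nonlinear weight updates during ANN training. The paper's device model is not reproduced here; the sketch below uses a commonly assumed exponential device-nonlinearity model (`nonlinear_update`, `beta` are illustrative, not from the paper) to show why nonlinear updates slow convergence:

```python
import numpy as np

def nonlinear_update(w, grad, lr=0.1, beta=3.0):
    """Assumed exponential nonlinearity model: the realized weight change
    shrinks as the weight approaches its bound, distorting SGD steps."""
    step = -lr * grad
    damp = np.where(step > 0, np.exp(-beta * w), np.exp(-beta * (1.0 - w)))
    return np.clip(w + step * damp, 0.0, 1.0)

def linear_update(w, grad, lr=0.1):
    """Ideal (LIWUS-like) case: realized change equals requested change."""
    return np.clip(w - lr * grad, 0.0, 1.0)

# Toy demonstration: minimize (w - 0.8)^2 with both update rules.
w_lin = w_non = 0.2
for _ in range(50):
    w_lin = linear_update(w_lin, 2.0 * (w_lin - 0.8))
    w_non = nonlinear_update(w_non, 2.0 * (w_non - 0.8))
print(w_lin, w_non)  # the nonlinear rule converges more slowly
```

Scaled up to an MNIST classifier, this damping of per-pulse weight changes is the kind of effect that plausibly accounts for the extra training epochs and the 1.5-4.2% accuracy gap the abstract reports for nonlinear updates.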