IMBUE: In-Memory Boolean-to-CUrrent Inference ArchitecturE for Tsetlin Machines
Format: Article
Language: English
Abstract: In-memory computing for Machine Learning (ML) applications remedies the von Neumann bottleneck by organizing computation to exploit parallelism and locality. Non-volatile memory devices such as Resistive RAM (ReRAM) offer integrated switching and storage capabilities, showing promising performance for ML applications. However, ReRAM devices pose design challenges, such as non-linear digital-to-analog conversion and circuit overheads. This paper proposes an In-Memory Boolean-to-Current Inference Architecture (IMBUE) that uses ReRAM-transistor cells to eliminate the need for such conversions. IMBUE processes Boolean feature inputs expressed as digital voltages and generates parallel current paths based on resistive memory states. The proportional column current is then translated back to the Boolean domain for further digital processing. The IMBUE architecture is inspired by the Tsetlin Machine (TM), an emerging ML algorithm based on intrinsically Boolean logic. IMBUE demonstrates significant performance improvements over binarized convolutional neural networks and digital TM in-memory implementations, achieving speedups of up to 12.99x and 5.28x, respectively.
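The Boolean-to-current-to-Boolean flow described in the abstract can be sketched as a simple behavioral model: Boolean inputs drive the word lines, each ReRAM-transistor cell contributes a current set by its programmed conductance, and the summed column current is thresholded back into the Boolean domain. The conductance values, read voltage, and sensing threshold below are illustrative assumptions, not figures from the paper.

```python
# Behavioral sketch of one IMBUE column (hypothetical parameters).
G_ON, G_OFF = 100e-6, 1e-6  # assumed on/off conductances in siemens


def column_current(inputs, states, v_read=0.3):
    """Sum the currents of cells whose Boolean input line is driven high.

    inputs: Boolean feature inputs applied as digital voltages.
    states: programmed ReRAM states (1 = low resistance, 0 = high resistance).
    """
    return sum(v_read * (G_ON if s else G_OFF)
               for x, s in zip(inputs, states) if x)


def to_boolean(i_col, i_threshold=50e-6):
    """Sense stage: translate the analog column current back to a bit."""
    return i_col >= i_threshold


# Two active cells in the low-resistance state dominate the column current,
# so the sensed output returns to the Boolean domain as a 1.
inputs = [1, 0, 1, 1]
states = [1, 1, 0, 1]
bit = to_boolean(column_current(inputs, states))
```

Because inputs and outputs stay Boolean, no explicit digital-to-analog or analog-to-digital converter appears in the model; the only analog quantity is the internal column current.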
DOI: 10.48550/arxiv.2305.12914