EFFICIENT SOFTMAX COMPUTATION WITH NO LOSS IN ACCURACY

Bibliographic Details
Main Authors: Budhkar, Prerna; Priya, Jainaveen Sundaram; Srinivasa, Srivatsa Rangachar; Karnik, Tanay; Chua, Vui Seng
Format: Patent
Language: English
Description
Abstract: A modified 2-pass version of the SoftMax operation can be implemented to reduce computational cost without loss of accuracy, in particular for deep learning neural networks such as transformer-based neural networks and large language models (LLMs). The first pass is modified to include two scalar operations at the end: a first scalar operation calculates the logarithm of the denominator, and a second scalar operation calculates an operand value as the sum of the logarithm of the denominator and the maximum value. The second pass is modified to perform addition and exponentiation: the operand value is subtracted from each element of the input tensor to obtain an exponent, and a base is raised to that exponent. The second pass therefore avoids divisions.
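
A minimal Python sketch of this flow is given below, assuming a one-dimensional input, a configurable base (natural base e by default), and an "online" first pass that fuses the running maximum with the running sum of exponentials; the function and variable names are illustrative and not taken from the patent.

    import numpy as np

    def softmax_two_pass(x, base=np.e):
        """Illustrative sketch of a modified 2-pass SoftMax; not the patented implementation."""
        x = np.asarray(x, dtype=np.float64)

        # Pass 1: track the running maximum m and the running denominator d,
        # where d accumulates base**(x_i - m) and is rescaled whenever m grows.
        m, d = -np.inf, 0.0
        for xi in x:
            m_new = max(m, xi)
            d = d * base ** (m - m_new) + base ** (xi - m_new)
            m = m_new

        # Two scalar operations at the end of the first pass:
        log_d = np.log(d) / np.log(base)   # logarithm of the denominator (in the chosen base)
        operand = m + log_d                # sum of the log-denominator and the maximum

        # Pass 2: subtraction and exponentiation only; no per-element division,
        # since base**(x_i - m - log_d) == base**(x_i - m) / d.
        return base ** (x - operand)

For example, softmax_two_pass(np.array([1.0, 2.0, 3.0])) matches the reference np.exp(x - x.max()) / np.exp(x - x.max()).sum() to floating-point precision, while the second pass issues one subtraction and one exponentiation per element instead of a division.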