PROPAGATING ATTENTION INFORMATION IN EFFICIENT MACHINE LEARNING MODELS
Format: Patent
Language: English
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for improved attention-based machine learning. A first attention propagation output is generated using a first transformer block of a plurality of transformer blocks, this generation including processing input data for the first transformer block using a first self-attention sub-block of the first transformer block. The first attention propagation output is propagated to a second transformer block of the plurality of transformer blocks. An output for the second transformer block is generated, this generation including generating output features for the second transformer block based on the first attention propagation output.
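The abstract describes computing self-attention once in an early transformer block and reusing that attention output in a later block instead of recomputing it there. As a rough illustration only, below is a minimal PyTorch-style sketch of this pattern; the class names (`PropagatingBlock`, `ReceivingBlock`), the linear projection used in place of the second block's self-attention, and all dimensions are hypothetical assumptions, not the patent's claimed implementation.

```python
import torch
import torch.nn as nn


class PropagatingBlock(nn.Module):
    """First transformer block: computes self-attention and exposes the
    attention output so it can be propagated to a later block.
    (Hypothetical structure for illustration only.)"""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor):
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h)  # self-attention sub-block
        x = x + attn_out
        x = x + self.mlp(x)
        # attn_out plays the role of the "attention propagation output"
        return x, attn_out


class ReceivingBlock(nn.Module):
    """Second transformer block: skips its own self-attention and instead
    generates output features from the propagated attention output."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # cheap stand-in for full self-attention
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, propagated_attn: torch.Tensor):
        x = x + self.proj(propagated_attn)  # reuse upstream attention output
        x = x + self.mlp(x)
        return x


# Usage: propagate attention information from the first block to the second.
x = torch.randn(2, 16, 64)  # (batch, tokens, dim)
block1 = PropagatingBlock(dim=64, num_heads=4)
block2 = ReceivingBlock(dim=64)
x, attn_out = block1(x)
x = block2(x, attn_out)
print(x.shape)  # torch.Size([2, 16, 64])
```

In this reading, the receiving block replaces its quadratic-cost self-attention with a linear projection of the propagated attention output, which is one plausible source of the efficiency gain the title refers to.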