A Fully Quantized Training Accelerator for Diffusion Network With Tensor Type & Noise Strength Aware Precision Scheduling
Published in: IEEE Transactions on Circuits and Systems II: Express Briefs, 2024-12, Vol. 71 (12), p. 4994-4998
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Fine-grained mixed-precision fully-quantized methods have great potential to accelerate neural network training, but existing methods exhibit large accuracy loss on more complex models such as diffusion networks. This brief introduces a fully-quantized training accelerator for diffusion networks. It features a novel training framework with tensor-type- and noise-strength-aware precision scheduling to optimize bit-width allocation. The processing cluster design dynamically switches bit-width mappings for model weights, supports concurrent processing at four different bit-widths, and incorporates a gradient square sum collection unit to minimize on-chip memory access. Experimental results show up to 2.4× training speedup and an 81% reduction in operation bit-width overhead compared to existing designs, with minimal impact on image generation quality.
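The precision-scheduling idea in the abstract can be pictured with a short sketch. The brief does not publish its exact policy, so the tensor categories, the thresholds, and the candidate bit-width set {4, 8, 12, 16} below are illustrative assumptions; the only grounded facts are that bit-widths are allocated per tensor type and per diffusion noise strength, and that the hardware processes four bit-widths concurrently.

```python
# Hypothetical sketch of tensor-type- and noise-strength-aware precision
# scheduling. The thresholds, tensor categories, and bit-width set
# {4, 8, 12, 16} are assumptions chosen only to illustrate the idea of
# allocating narrower operands when the diffusion noise strength is high
# (quantization error is masked by noise) and wider ones for
# error-sensitive tensors such as gradients.

def schedule_bitwidth(tensor_type: str, noise_strength: float) -> int:
    """Pick a per-tensor bit-width from four concurrently supported levels.

    tensor_type:    'weight', 'activation', or 'gradient' (assumed categories)
    noise_strength: normalized diffusion noise level in [0, 1], where 1 is
                    the fully-noised end of the forward process
    """
    levels = (4, 8, 12, 16)  # four concurrent bit-widths, per the abstract

    # Gradients are typically the most quantization-sensitive tensor type,
    # so bias them toward the high end of the available precisions.
    sensitivity = {'weight': 0, 'activation': 0, 'gradient': 1}[tensor_type]

    # Strong noise masks quantization error, permitting a lower bit-width.
    if noise_strength > 0.75:
        idx = 0
    elif noise_strength > 0.5:
        idx = 1
    elif noise_strength > 0.25:
        idx = 2
    else:
        idx = 3
    return levels[min(idx + sensitivity, len(levels) - 1)]


# Example: a gradient tensor late in denoising (low noise) gets 16 bits,
# while a weight tensor at a heavily-noised step gets 4 bits.
assert schedule_bitwidth('gradient', 0.1) == 16
assert schedule_bitwidth('weight', 0.9) == 4
```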
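The gradient square sum collection unit can likewise be sketched as a reduction fused into the gradient stream, so the square sum is gathered while each tile is still on-chip rather than by re-reading gradient tensors from memory. The tiling, the NumPy types, and the function `backward_with_sqsum` are hypothetical; only the goal of collecting the gradient square sum while minimizing memory access comes from the abstract.

```python
import numpy as np

# Hypothetical sketch of on-the-fly gradient square-sum collection: the
# reduction sum(g^2) is accumulated as each gradient tile streams past,
# so no second pass over the gradients is needed. Tile shape, count, and
# dtype are illustrative assumptions.

def backward_with_sqsum(grad_tiles):
    """Stream gradient tiles once, returning them plus their square sum."""
    sq_sum = 0.0
    out = []
    for tile in grad_tiles:
        out.append(tile)                      # gradient still reaches the optimizer
        sq_sum += float(np.sum(tile * tile))  # reduction fused with the stream
    return out, sq_sum


tiles = [np.random.randn(64, 64).astype(np.float32) for _ in range(8)]
grads, sq = backward_with_sqsum(tiles)
```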
ISSN: 1549-7747 (print); 1558-3791 (electronic)
DOI: 10.1109/TCSII.2024.3439319