Low Precision Constant Parameter CNN on FPGA
Format: Article
Language: English
Abstract: We report FPGA implementation results for low-precision CNN convolution layers optimized for sparse and constant parameters. We describe techniques that amortize the cost of common-factor multiplication and automatically leverage dense, hand-tuned LUT structures. We apply this method to corner-case residual blocks of ResNet on a sparse ResNet50 model to assess achievable utilization and frequency, and demonstrate an effective performance of 131 and 23 TOP/chip for the corner-case blocks. The projected performance of a multichip persistent implementation of all ResNet50 convolution layers is 10k im/s/chip at batch size 2. This is 1.37x higher than the V100 GPU upper bound at the same batch size after normalizing for sparsity.
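The common-factor amortization mentioned in the abstract can be illustrated in software, although the paper's actual hardware construction is not reproduced here. The sketch below (names and decomposition are illustrative assumptions, not the authors' method) shows the standard idea behind constant multiplication on FPGAs: a product by a fixed weight is decomposed into shifts and adds, and a factor shared by several weights is computed once and reused, so its adder cost is amortized across outputs.

```python
# Illustrative sketch (assumption, not the paper's algorithm):
# multiply by compile-time constants using only shifts and adds,
# and share a common factor between two constant weights.

def shift_add_terms(c: int) -> list[int]:
    # Naive binary decomposition: return shift amounts of the set bits,
    # so c*x == sum(x << s for s in shift_add_terms(c)).
    terms, bit = [], 0
    while c:
        if c & 1:
            terms.append(bit)
        c >>= 1
        bit += 1
    return terms

def mul_const(x: int, c: int) -> int:
    # Constant multiplication realized as shifts and adds only,
    # mirroring an adder-tree implementation in LUT fabric.
    return sum(x << s for s in shift_add_terms(c))

# Weights 5 and 10 share the common factor 5: compute 5*x once
# ((x << 2) + x, one adder), then get 10*x with a single wire shift,
# instead of spending a second adder on 10 = 1010b.
x = 7
p5 = mul_const(x, 5)   # (x << 2) + x
p10 = p5 << 1          # reuse of the shared factor: no extra adder
assert p5 == 35 and p10 == 70
```

In hardware the same sharing shrinks the adder tree rather than the instruction count, which is one way the per-weight multiplication cost can be amortized when parameters are constant.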
DOI: 10.48550/arxiv.1901.04969