OctCNN: A High Throughput FPGA Accelerator for CNNs using Octave Convolution Algorithm
Published in: | IEEE Transactions on Computers, 2022-01, Vol. 71 (8), p. 1-1 |
Main authors: | , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | With the rapid development of convolutional neural networks (CNNs), FPGAs have become one of the most attractive candidates for deploying CNNs. However, previous FPGA solutions based on the traditional convolution are still limited by computational power. In this article, we introduce the octave convolution (OctConv) into CNN accelerator design for the first time to improve hardware acceleration efficiency, and we design a dedicated OctPU for mapping OctConv to FPGAs, which employs a parallel dataflow pattern to exploit the parallelism of OctConv. Then, we present a novel and scalable architecture that dynamically combines an inter-layer pipelined structure and a multi-layer reuse structure. Meanwhile, to obtain the optimized solution, we build a multidimensional performance and resource analysis model and a two-stage search algorithm based on greedy and heuristic methods. We evaluate our proposal by implementing VGG16 and ResNet50 on the Xilinx VU9P FPGA. Experimental results show that our prototypes achieve an average of 3321 GOP/s for the convolutional layers of VGG16 and 2873 GOP/s for the overall ResNet50 using OctConv. Compared to previous works based on the traditional convolution, our prototypes achieve a 1.72x to 2.33x speedup in throughput and a 2.01x to 5.18x improvement in computational density. Our design also achieves an excellent compromise between performance and generalization. |
ISSN: | 0018-9340 1557-9956 |
DOI: | 10.1109/TC.2021.3110413 |
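
For readers unfamiliar with the octave convolution mentioned in the abstract, the following is a minimal, illustrative PyTorch sketch of the standard OctConv formulation (channels split into a full-resolution high-frequency path and a half-resolution low-frequency path, mixed by four convolutions). The class name, the `alpha` split ratio, and the interpolation/pooling choices are assumptions for illustration only; they do not reflect the paper's OctPU hardware dataflow or parameters.

```python
# Illustrative OctConv sketch (not the paper's accelerator implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv2d(nn.Module):
    """Octave convolution: high-frequency maps stay at full resolution,
    low-frequency maps live at half resolution; four paths mix them."""
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
        super().__init__()
        self.in_lo = int(alpha * in_ch)      # low-frequency input channels
        self.in_hi = in_ch - self.in_lo      # high-frequency input channels
        self.out_lo = int(alpha * out_ch)
        self.out_hi = out_ch - self.out_lo
        pad = kernel_size // 2
        # Four frequency-to-frequency paths: H->H, H->L, L->H, L->L
        self.conv_hh = nn.Conv2d(self.in_hi, self.out_hi, kernel_size, padding=pad)
        self.conv_hl = nn.Conv2d(self.in_hi, self.out_lo, kernel_size, padding=pad)
        self.conv_lh = nn.Conv2d(self.in_lo, self.out_hi, kernel_size, padding=pad)
        self.conv_ll = nn.Conv2d(self.in_lo, self.out_lo, kernel_size, padding=pad)

    def forward(self, x_hi, x_lo):
        # High-frequency output: same-resolution conv plus upsampled low branch
        y_hi = self.conv_hh(x_hi) + F.interpolate(
            self.conv_lh(x_lo), scale_factor=2, mode="nearest")
        # Low-frequency output: downsampled high branch plus same-resolution conv
        y_lo = self.conv_hl(F.avg_pool2d(x_hi, 2)) + self.conv_ll(x_lo)
        return y_hi, y_lo

# Example: 64 input channels split 50/50; high path at 56x56, low path at 28x28.
x_hi = torch.randn(1, 32, 56, 56)
x_lo = torch.randn(1, 32, 28, 28)
y_hi, y_lo = OctConv2d(64, 128, alpha=0.5)(x_hi, x_lo)
print(y_hi.shape, y_lo.shape)  # (1, 64, 56, 56) and (1, 64, 28, 28)
```

Because the low-frequency path operates on feature maps with a quarter of the spatial positions, a large fraction of the multiply-accumulates is cheaper than in a traditional convolution of the same channel width, which is the property the accelerator described above exploits.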