Jiangwei Shang, Zhan Zhang, Kun Zhang, Chuanyou Li, Lei Qian, Hongwei Liu

An algorithm/hardware co‐optimized method to accelerate CNNs with compressed convolutional weights on FPGA

  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Computer Science Applications
  • Theoretical Computer Science
  • Software

Summary

Convolutional neural networks (CNNs) have shown remarkable advantages in a wide range of domains, at the expense of huge numbers of parameters and computations. Modern CNNs tend to grow even larger and more complex to achieve better inference accuracy; however, these large, complex structures slow down inference. Recently, compressing convolutional weights into sparse form by pruning unimportant parameters has been demonstrated to be an efficient way to reduce CNN computation. Meanwhile, field-programmable gate arrays (FPGAs) have become a popular hardware platform for accelerating CNN inference. In this paper, we propose an algorithm/hardware co-optimized method for accelerating CNN inference on FPGAs. On the algorithm side, we combine unstructured and structured parameter-sparsifying methods to achieve high sparsity while keeping the regularity of the convolutional weights, and we propose correspondingly hardware-friendly index representations for the sparse weights. On the hardware side, we propose a row-wise input-stationary dataflow that is tightly coupled with the algorithm, together with a row-wise computing engine (RConv Engine) built on this dataflow. Inside the RConv Engine, a scalar-vector structure implements the basic processing elements (PEs); the PEs are organized in a 2D structure with two work modes to flexibly compute feature maps of various sizes. Experimental results demonstrate that our co-optimized method achieves high sparsity of convolutional weights and that the computing engine achieves high computation efficiency. Compared with other accelerators, our method achieves up to a 10.9× speedup in FPS with the highest weight sparsity and negligible accuracy loss.
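The core idea of weight compression described above can be illustrated with a minimal sketch: magnitude-based pruning of a kernel, storage of the survivors as (value, index) pairs, and a multiply-accumulate that touches only non-zero weights. The paper's actual index representation is hardware-specific and not given in this summary; the flat-index scheme and the `prune_and_compress` / `sparse_dot` helpers below are illustrative assumptions, not the authors' format.

```python
def prune_and_compress(weights, sparsity):
    """Zero out the smallest-magnitude weights of a 2D kernel, then store
    only the survivors as parallel (value, flat_index) lists.

    This is unstructured magnitude pruning; the paper additionally applies
    structured pruning to keep the weight layout regular, which is not
    modeled here.
    """
    flat = [w for row in weights for w in row]
    n_keep = max(1, round(len(flat) * (1.0 - sparsity)))
    # Magnitude threshold: the n_keep-th largest absolute value survives.
    threshold = sorted((abs(w) for w in flat), reverse=True)[n_keep - 1]
    values, indices = [], []
    for i, w in enumerate(flat):
        if abs(w) >= threshold and len(values) < n_keep:
            values.append(w)
            indices.append(i)
    return values, indices


def sparse_dot(values, indices, dense_input):
    """Multiply-accumulate only the non-zero weights -- the computation
    saving that motivates pruning-based acceleration."""
    return sum(v * dense_input[i] for v, i in zip(values, indices))


# Example: a 3x3 kernel compressed at 50% target sparsity.
kernel = [[0.9, -0.05, 0.0],
          [0.02, 0.8, -0.7],
          [0.01, 0.0, 0.6]]
vals, idx = prune_and_compress(kernel, sparsity=0.5)
# Only 4 of 9 weights are stored, and sparse_dot performs 4 MACs instead of 9.
```

In hardware, the index list plays the role of the paper's index representation: it tells each PE which input elements to fetch, so that zero weights cost neither storage nor multiply-accumulate cycles.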

