Stochastic computing for low-power and high-speed deep learning on FPGA

Conference Publication ResearchOnline@JCU
Lammie, Corey; Azghadi, Mostafa Rahimi
Abstract

Stochastic Computing (SC) presents a low-cost, low-power alternative to conventional binary computing. In SC, continuous values are represented by stochastically generated bit streams, and complex calculations can be realized very efficiently by performing simple, hardware-friendly bit-wise operations on these streams. However, the inherent randomness and approximation in SC can introduce undesirable computational errors. Because Convolutional Neural Networks (CNNs) are inherently error-tolerant, SC can be embedded in them to gain higher speed and lower power consumption without significant accuracy loss. In this paper, we propose using SC techniques to approximate multiplication operations on fixed-point weights and biases during the training of CNNs. By employing such techniques, we demonstrate near state-of-the-art learning performance on the MNIST and CIFAR-10 datasets, while achieving significant resource and speed improvements when implementing the deep networks on a Field Programmable Gate Array (FPGA). For MNIST, SC yields an almost 3-times increase in learning speed over conventional computing, with only a 1.37% degradation in validation accuracy. Similarly, for CIFAR-10, training is accelerated 3.5 times with a degradation of 3.39%. We also show that our FPGA implementations of CNNs adopting stochastic multipliers consume over 17 times less power than their GPU counterparts.
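To illustrate the core idea the abstract describes, the following is a minimal software sketch of unipolar stochastic multiplication: a value in [0, 1] is encoded as a bitstream whose density of 1s equals the value, and a single AND gate per bit pair approximates the product. This is an illustrative model only, not the paper's fixed-point FPGA implementation; the function name and stream length are assumptions.

```python
import random

def sc_multiply(a, b, stream_len=4096, seed=0):
    """Approximate a * b for a, b in [0, 1] using unipolar stochastic computing.

    Illustrative sketch, not the paper's hardware design: each operand is
    encoded as a random bitstream whose probability of a 1 equals its value;
    the bitwise AND of two independent streams then has a 1-density that
    estimates the product, recovered by counting 1s.
    """
    rng = random.Random(seed)
    # Encode: bit i is 1 with probability equal to the operand's value.
    stream_a = [rng.random() < a for _ in range(stream_len)]
    stream_b = [rng.random() < b for _ in range(stream_len)]
    # Multiply: one AND operation per bit pair (a single gate in hardware).
    product_stream = [x and y for x, y in zip(stream_a, stream_b)]
    # Decode: the fraction of 1s approximates a * b, with error that
    # shrinks as the stream length grows.
    return sum(product_stream) / stream_len
```

Longer streams reduce the approximation error at the cost of latency, which is the accuracy/speed trade-off the paper exploits during CNN training.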

Journal

N/A

Publication Name

Proceedings - IEEE International Symposium on Circuits and Systems

Volume

N/A

ISBN/ISSN

978-1-7281-0397-6

Edition

N/A

Issue

N/A

Pages Count

5

Location

Sapporo, Japan

Publisher

Institute of Electrical and Electronics Engineers

Publisher Url

N/A

Publisher Location

Piscataway, NJ, USA

Publish Date

N/A

Url

N/A

Date

N/A

EISSN

N/A

DOI

10.1109/ISCAS.2019.8702248