International Conference on Mechanical Engineering, Automation and Control Systems 2016
Implementation of 14 bits floating point numbers of calculating units for neural network hardware development
Subject areas: Mechanical Manufacturing; Radio Electronics; Computer Science
Zoev, I.V.^1; Beresnev, A.P.^1; Mytsko, E.A.^1; Malchukov, A.N.^1
^1 Tomsk Polytechnic University, Lenina Ave., 30, Tomsk, 634050, Russia
Keywords: Environment analysis; Floating point multiplication; Floating point numbers; Floating points; Modern architectures; Neural network hardware; Optimum value; Subtractor
Full text: https://iopscience.iop.org/article/10.1088/1757-899X/177/1/012044/pdf
DOI: 10.1088/1757-899X/177/1/012044
Subject category: Computer Science (General)
Source: IOP
【 Abstract 】
An important aspect of modern automation is machine learning; in particular, neural networks are used for environment analysis and for decision making based on available data. This article covers the operations most frequently performed on floating-point numbers in artificial neural networks. Based on the architecture of modern integrated circuits, a width of 14 bits is selected as the optimal size of floating-point numbers for implementation on FPGAs. The floating-point multiplication (multiplier) algorithm is described, and the features of the addition (adder) and subtraction (subtractor) operations are discussed. Furthermore, the floating-point comparison operations required by convolutional neural networks ('less than' and 'greater than or equal') are presented. In conclusion, the resulting units are compared with Altera's calculating units.
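For reference, the multiply and compare operations the abstract names can be sketched in software. The C sketch below is not the authors' hardware design: it assumes a hypothetical field split of 1 sign bit, 5 exponent bits (bias 15) and 8 fraction bits (the paper fixes the exact widths in the full text), ignores subnormals, infinities and NaN, truncates rather than rounds, and the names `fp14`, `fp14_mul` and `fp14_lt` are illustrative inventions.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed 14-bit layout: 1 sign, 5 exponent (bias 15), 8 fraction bits.
 * Exponent 0 is treated as zero; no subnormals, Inf or NaN are modelled. */
#define EXP_BITS  5
#define FRAC_BITS 8
#define EXP_BIAS  15
#define EXP_MASK  ((1u << EXP_BITS) - 1u)
#define FRAC_MASK ((1u << FRAC_BITS) - 1u)
#define MAG_MASK  ((1u << (EXP_BITS + FRAC_BITS)) - 1u)

typedef uint16_t fp14;                  /* value lives in the low 14 bits */

static unsigned sign(fp14 x) { return (x >> (EXP_BITS + FRAC_BITS)) & 1u; }
static unsigned expo(fp14 x) { return (x >> FRAC_BITS) & EXP_MASK; }
static unsigned frac(fp14 x) { return x & FRAC_MASK; }

/* Multiplier: XOR the signs, add the exponents (removing one bias),
 * multiply the significands with the implicit leading 1 restored,
 * then renormalise the product. */
fp14 fp14_mul(fp14 a, fp14 b)
{
    unsigned s = sign(a) ^ sign(b);
    if (expo(a) == 0 || expo(b) == 0)        /* zero operand: signed zero */
        return (fp14)(s << (EXP_BITS + FRAC_BITS));

    int e = (int)expo(a) + (int)expo(b) - EXP_BIAS;
    uint32_t ma = (1u << FRAC_BITS) | frac(a);   /* 9-bit significands */
    uint32_t mb = (1u << FRAC_BITS) | frac(b);
    uint32_t p  = ma * mb;                       /* 17..18-bit product */

    /* product of [1,2) x [1,2) lies in [1,4): shift once if >= 2.0 */
    if (p & (1u << (2 * FRAC_BITS + 1))) { p >>= 1; e++; }
    unsigned f = (p >> FRAC_BITS) & FRAC_MASK;   /* truncate to 8 bits */

    if (e <= 0) return (fp14)(s << (EXP_BITS + FRAC_BITS)); /* underflow */
    if (e >= (int)EXP_MASK) e = (int)EXP_MASK;              /* saturate  */
    return (fp14)((s << (EXP_BITS + FRAC_BITS)) |
                  ((unsigned)e << FRAC_BITS) | f);
}

/* Comparison ('less than'): sign-magnitude codes compare as plain
 * integers within one sign; across signs the negative operand is
 * smaller. Zero is assumed to be encoded as +0 only. */
int fp14_lt(fp14 a, fp14 b)
{
    if (sign(a) != sign(b)) return sign(a);               /* neg < pos */
    if (sign(a)) return (b & MAG_MASK) < (a & MAG_MASK);  /* both negative */
    return (a & MAG_MASK) < (b & MAG_MASK);
}

int main(void)
{
    fp14 x = 0x0F80;                         /* 1.5 under this layout */
    fp14 y = 0x1000;                         /* 2.0 */
    printf("0x%04X\n", fp14_mul(x, y));      /* 0x1080, i.e. 3.0 */
    printf("%d\n", fp14_lt(x, y));           /* 1: 1.5 < 2.0 */
    return 0;
}
```

Note the comparison path: on sign-magnitude codes both comparators reduce to integer magnitude comparison, and 'greater than or equal' is simply the negation of 'less than', which is one reason these operations are cheap to realise on an FPGA.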