Journal "Software Engineering"
a journal on theoretical and applied science and technology
ISSN 2220-3397

Issue No. 1, 2020

DOI: 10.17587/prin.11.21-25
Approaches to Improving the Performance of Neural Network Computing
V. V. Korneev, korv@rdi-kvant.ru, Research and Development Institute Kvant, Moscow, 125438, Russian Federation
Corresponding author: Korneev Victor V., Principal Researcher, Research and Development Institute Kvant, Moscow, 125438, Russian Federation, E-mail: korv@rdi-kvant.ru
Received on October 22, 2019
Accepted on November 09, 2019

Approaches to improving the performance of neural network computing are considered. Building neural network calculators for the traditional neural network paradigm by increasing the number of computational cores leads to higher energy consumption and heat dissipation problems. The root of the problem is the need to perform multiplication on multi-bit floating-point operands. A possible way to overcome this drawback is to reduce the number of operations performed, either by using tensorized neural networks or by moving to binary neural networks and networks with very low-bit weights and inputs. The new paradigm of neuromorphic neural networks likewise avoids multiplications on multi-bit floating-point operands. Thus, the main route to higher neural network computing performance is the development of new algorithms. The architecture of neural network computers should provide fast access by many computational cores to local blocks of distributed single-level memory; this requirement is typical of high-performance parallel systems in general and is not specific to neural network computing.
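The replacement of floating-point multiplications in binary neural networks can be sketched as follows. This is a minimal illustrative example, not taken from the paper: for vectors with components in {-1, +1}, packed as bitmasks, the dot product reduces to an XOR and a popcount, with no multiplications at all (function and variable names here are hypothetical).

```python
def pack(v):
    """Pack a {-1, +1} vector into an integer bitmask (+1 -> bit 1, -1 -> bit 0)."""
    bits = 0
    for i, x in enumerate(v):
        if x == 1:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed {-1, +1}^n vectors.

    Each differing bit contributes -1 and each matching bit contributes +1,
    so dot = n - 2 * popcount(a XOR b). Only bitwise operations are used.
    """
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
# Reference result computed with ordinary multiplications:
# 1*1 + (-1)*1 + 1*(-1) + 1*1 = 0
print(binary_dot(pack(a), pack(b), len(a)))  # -> 0
```

In hardware, the same idea lets a wide XNOR/popcount unit stand in for an array of floating-point multipliers, which is the source of the energy savings the abstract refers to.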

Keywords: neural network computing paradigms, binary neural networks, tensorizing neural networks, neuromorphic neural networks
pp. 21–25
For citation:
Korneev V. V. Approaches to Improving the Performance of Neural Network Computing, Programmnaya Ingeneria, 2020, vol. 11, no. 1, pp. 21–25.