Journal "Software Engineering"
A journal of theoretical and applied science and technology
ISSN 2220-3397
Issue No. 1, 2020
Approaches to improving the performance of neural network computing are considered. For the traditional neural network paradigm, building neural network computers by simply increasing the number of computational cores leads to growing energy consumption and heat dissipation problems. The root of the problem is the need to perform multiplication on multi-bit floating-point operands. A possible way to overcome this drawback is to reduce the number of operations performed, either by using tensorized neural networks or by moving to binary neural networks and networks with very low-bit weights and inputs. The new paradigm of neuromorphic neural networks likewise avoids multiplication on multi-bit floating-point operands. Thus, the main route to higher performance in neural network computing is the development of new algorithms. The architecture of a neural network computer should provide fast access by many computational cores to local blocks of distributed single-level memory; this requirement is typical of high-performance parallel systems in general and is not specific to neural network computing.
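To make the multiplication-free claim concrete, the sketch below shows how a dot product in a binary neural network can be computed with XNOR and popcount instead of floating-point multiplication. This is a minimal illustration under the standard assumption that weights and inputs are binarized to {-1, +1}; the function names (binarize, pack, xnor_popcount_dot) are illustrative and not taken from the article.

```python
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """Map real values to {-1, +1} by sign (standard binarization)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def pack(signs: np.ndarray) -> int:
    """Pack a {-1, +1} vector into an integer: bit i = 1 iff signs[i] == +1."""
    bits = 0
    for i, s in enumerate(signs):
        if s > 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two packed {-1, +1} vectors of length n.

    XNOR yields a 1 bit wherever the signs agree, so
    dot = (#agreements) - (#disagreements) = 2 * popcount(XNOR) - n.
    No multi-bit multiplication is required.
    """
    mask = (1 << n) - 1
    agreements = bin((~(a_bits ^ b_bits)) & mask).count("1")
    return 2 * agreements - n

# Usage: the bitwise result matches the ordinary integer dot product.
rng = np.random.default_rng(0)
x = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal(64))
assert xnor_popcount_dot(pack(x), pack(w), 64) == int(np.dot(x.astype(int), w.astype(int)))
```

On real hardware the same idea maps to wide XNOR and popcount instructions over machine words, which is why binary and low-bit networks can sidestep the energy cost of floating-point multipliers described above.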