ABSTRACTS OF ARTICLES OF THE JOURNAL "INFORMATION TECHNOLOGIES"
Vol. 29, No. 2, 2023

DOI: 10.17587/it.29.84-90

A. E. Trubin, Cand. Sci. (Econ.), Associate Professor, Director of the Digital Economy Department,
A. V. Batishchev, Cand. Sci. (Econ.), Associate Professor, Head of the Department,
A. N. Aleksahin, Cand. Sci. (Ped.), Head of the Department,
Synergy University, Moscow, Russian Federation,
A. E. Zubanova, Master Student, A. A. Morozov, Master Student,
Orel State University named after I. S. Turgenev, Orel, Russian Federation

Building an Optimal Convolutional Neural Network Model for Solving Complex Character Recognition Problems

The study aims to develop a lightweight convolutional neural network architecture that handles the narrowly focused task of recognizing complex characters better than large-scale, well-known models. The source data are characters of the Japanese language drawn from its two syllabic alphabets, hiragana and katakana. These scripts are among the most difficult to recognize: their writing style involves a large number of fine features and many visually similar characters, which greatly complicates classification and recognition. The article presents the authors' convolutional neural network model, consisting of four convolutional layers, three subsampling (pooling) layers, and three dropout layers. The developed model is compared with one of the most popular neural network models, EfficientNetB0, both in terms of architecture and of results on the same data. The custom convolutional neural network model was implemented with the classic Keras + TensorFlow stack, since these libraries provide the most convenient tools for machine learning work. The result of the research is a technology for fast and accurate recognition of complex symbols based on a convolutional neural network, which can become the basis of a software product in the field of computer vision.
Keywords: neural networks, convolutional neural network model, computer vision, neural network architecture, machine learning

P. 84–90
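The abstract describes the model only at the level of layer counts (four convolutional, three pooling, three dropout layers) and names Keras + TensorFlow as the implementation stack. A minimal sketch of such an architecture is shown below; the filter counts, kernel sizes, input size, class count, and dropout rates are illustrative assumptions, not the paper's actual hyperparameters.

```python
# Sketch of a CNN with four convolutional, three pooling, and three dropout
# layers, as described in the abstract. All hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_kana_cnn(input_shape=(48, 48, 1), num_classes=92):
    """Build a small CNN for kana classification (sizes are illustrative)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),   # subsampling layer 1
        layers.Dropout(0.25),     # dropout layer 1
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),   # subsampling layer 2
        layers.Dropout(0.25),     # dropout layer 2
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),   # subsampling layer 3
        layers.Dropout(0.5),      # dropout layer 3
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_kana_cnn()
```

Such a model is far smaller than EfficientNetB0 (about 5.3 million parameters), which is consistent with the authors' goal of a lighter architecture for a narrow task.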

References

  1. Bobyleva E. A., Rodionov A. V. Research of existing approaches to recognition of Japanese hieroglyphic characters, Innovacionnaja nauka, 2017, vol. 3, no. 4, pp. 28–32 (in Russian).
  2. Bredihin A. I. Training algorithms for convolutional neural networks, Vestnik JuGU, 2019, no. 1 (52), pp. 41–54 (in Russian).
  3. Zakharov A. V., Zubanova A. E., Morozov A. A., Novikov S. V., Trubin A. E., Filippskih S. L., Shilenok A. O. Use of neural network technologies in the digital economy of Russia to identify the dependencies of socio-economic development in the regions, Informatsionnye sistemy i tekhnologii, 2021, no. 6 (128), pp. 21–30 (in Russian).
  4. Zubanova A., Morozov A., Trubin A., Aleksahin A., Novikov S. Synergy of econometric approach and use of neural networks to determine factors of provision of transport and logistics infrastructure in regions of Russia, Prikladnaya informatika, 2022, vol. 17, no. 1, pp. 5–18 (in Russian), DOI: 10.37791/2687-0649-2022-17-1-5-18.
  5. Trubin A., Morozov A., Zubanova A., Ozheredov V., Korepanova V. The method of preprocessing machine learning data for solving computer vision problems, Prikladnaya informatika, 2022, vol. 17, no. 4, pp. 47–56 (in Russian), DOI: 10.37791/2687-0649-2022-17-4-47-56.
  6. Haykin S. Neural networks: Full course, Moscow, Dialektika Publ., 2019, 1104 p. (in Russian).
  7. Bottou L., Curtis F. E., Nocedal J. Optimization methods for large-scale machine learning, SIAM Review, 2018, vol. 60, no. 2, pp. 223–311.
  8. Bunke H., Wang P. S. P. Handbook of Character Recognition and Document Image Analysis, USA, World Scientific Publishing Company, 1997, 883 p.
  9. Heaton J. Goodfellow I., Bengio Y., Courville A.: Deep learning (book review), Genetic Programming and Evolvable Machines, 2017, vol. 19, no. 1–2, pp. 305–307.
  10. Hinton G., Srivastava N., Swersky K. Neural networks for machine learning: Overview of mini-batch gradient descent, Coursera lecture notes, 2012, pp. 4–6.
  11. Hubel D. H., Wiesel T. N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex, Journal of Physiology, 1962, vol. 160, no. 1, pp. 106–154.
  12. Ioffe S., Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift, Proceedings of the 32nd International Conference on Machine Learning, 2015, pp. 448–456.
  13. Krizhevsky A., Sutskever I., Hinton G. E. ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 2012, vol. 25, no. 2, pp. 84–90.
