ABSTRACTS OF ARTICLES OF THE JOURNAL "INFORMATION TECHNOLOGIES".
No. 2. Vol. 31. 2025

DOI: 10.17587/it.31.87-92

K. R. Kharrasov, Assistant, M. S. Moseva, PhD, Senior Lecturer, M. G. Gorodnichev, PhD, Assistant Professor, Moscow Technical University of Communication and Informatics

Study of Neural Network Robustness in the Task of Pattern Recognition

The problem of robust pattern recognition in images is considered. The types of attacks on machine learning systems and the methods of defense against them are discussed. An experiment applying the described approach of robust image recognition under adversarial attacks is carried out, and the reliability of conventional and robust neural network classifiers is compared on the basis of the resulting metrics.
Keywords: pattern recognition, robustness, adversarial attack, adversarial training
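The adversarial attacks and adversarial training named in the keywords build on the Fast Gradient Sign Method (FGSM) of reference [1]. As a minimal illustrative sketch (not taken from the article), FGSM perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input; the toy linear classifier, weights, and epsilon below are purely hypothetical:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: shift the input by eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

# Toy linear "classifier": score = w.x + b, predicted label = sign(score)
w = np.array([1.0, -2.0])
b = 0.5

def loss_grad_wrt_input(x, y):
    # Gradient of the logistic loss L = log(1 + exp(-y * (w.x + b)))
    # with respect to x, for a label y in {-1, +1}
    s = w @ x + b
    return (-y * w) / (1.0 + np.exp(y * s))

x = np.array([2.0, 0.5])   # clean input with true label y = +1
y = 1
print(np.sign(w @ x + b))  # prints 1.0: correctly classified

# A modest perturbation flips the prediction
x_adv = fgsm_perturb(x, loss_grad_wrt_input(x, y), eps=1.2)
print(np.sign(w @ x_adv + b))  # prints -1.0: misclassified
```

Adversarial training, as compared in the article, augments the training set with such perturbed examples so that the classifier's decision becomes stable under them.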

P. 87-92

Full text on eLIBRARY

References

  1. Goodfellow I., Shlens J., Szegedy C. Explaining and Harnessing Adversarial Examples, arXiv: 1412.6572.
  2. Goodfellow I., Bengio Y., Courville A. Deep Learning, Cambridge, Massachusetts, MIT Press, 2016, pp. 180—184.
  3. The MNIST dataset of handwritten digits, available at: https://www.kaggle.com/datasets/hojjatk/mnist-dataset
  4. Goodfellow I., Warde-Farley D., Mirza M., Courville A., Bengio Y. Maxout Networks, arXiv: 1302.4389.
  5. Moosavi-Dezfooli S.-M., Fawzi A., Frossard P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574—2582.
  6. Su J., Vargas D. V., Sakurai K. One pixel attack for fooling deep neural networks, arXiv: 1710.08864.
  7. Li H., Namiot D. A Survey of Adversarial Attacks and Defenses for image data on Deep Learning, International Journal of Open Information Technologies, 2022, vol. 10, no. 5, pp. 9—16.
  8. Namiot D., Ilyushin E., Chizhov I. The rationale for working on robust machine learning, International Journal of Open Information Technologies, 2021, vol. 9, no. 11, pp. 68—74.
  9. Namiot D., Ilyushin E., Chizhov I. Artificial intelligence and cybersecurity, International Journal of Open Information Technologies, 2022, vol. 10, no. 9, pp. 135—147.
  10. Schott L., Rauber J., Bethge M., Brendel W. Towards the first adversarially robust neural network model on MNIST, arXiv: 1805.09190.
  11. Song Y., Kim T., Nowozin S., Ermon S., Kushman N. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, arXiv: 1710.10766.
  12. Madry A., Makelov A., Schmidt L., Tsipras D., Vladu A. Towards Deep Learning Models Resistant to Adversarial Attacks, arXiv: 1706.06083.
