ABSTRACTS OF ARTICLES OF THE JOURNAL "INFORMATION TECHNOLOGIES".
No. 2. Vol. 31. 2025
DOI: 10.17587/it.31.87-92
K. R. Kharrasov, Assistant, M. S. Moseva, PhD, Senior Lecturer, M. G. Gorodnichev, PhD, Assistant Professor, Moscow Technical University of Communication and Informatics
Study of Neural Network Robustness in the Task of Pattern Recognition
The problem of robust pattern recognition in images is considered. Types of attacks on machine learning systems and methods of defense against them are discussed. An experiment applying the described approach to image recognition that is robust to adversarial attacks is carried out, and the reliability of conventional and robust neural network classifiers is compared on the basis of the resulting metrics.
Keywords: pattern recognition, robustness, adversarial attack, adversarial training
P. 87-92
Full text on eLIBRARY
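The adversarial attacks discussed in the article include the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (arXiv:1412.6572), which perturbs an input in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic-regression classifier (the model, weights, and epsilon below are illustrative assumptions, not taken from the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM perturbation: x_adv = x + eps * sign(grad_x loss).

    Toy logistic-regression model: p = sigmoid(w @ x + b),
    cross-entropy loss; its input gradient is (p - y) * w.
    """
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y) * w         # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Hypothetical example: a correctly classified point is pushed
# across the decision boundary by a small signed perturbation.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])         # w @ x + b = 1.0 > 0 -> class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=1.5)
# w @ x_adv + b = -3.5 < 0 -> now misclassified as class 0
```

Adversarial training, the defense compared in the article (Madry et al., arXiv:1706.06083), augments training with such perturbed examples so the classifier learns to resist them.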
References
- Goodfellow I., Shlens J., Szegedy C. Explaining and Harnessing Adversarial Examples, arXiv: 1412.6572.
- Goodfellow I., Bengio Y., Courville A. Deep Learning, Cambridge, Massachusetts, MIT Press, 2016, pp. 180—184.
- The MNIST dataset of handwritten digits, available at: https://www.kaggle.com/datasets/hojjatk/mnist-dataset
- Goodfellow I., Warde-Farley D., Mirza M., Courville A., Bengio Y. Maxout Networks, arXiv: 1302.4389.
- Moosavi-Dezfooli S.-M., Fawzi A., Frossard P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574—2582.
- Su J., Vargas D. V., Kouichi S. One pixel attack for fooling deep neural networks, arXiv: 1710.08864.
- Li H., Namiot D. A Survey of Adversarial Attacks and Defenses for image data on Deep Learning, International Journal of Open Information Technologies, 2022, vol. 10, no. 5, pp. 9—16.
- Namiot D., Ilyushin E., Chizhov I. The rationale for working on robust machine learning, International Journal of Open Information Technologies, 2021, vol. 9, no. 11, pp. 68—74.
- Namiot D., Ilyushin E., Chizhov I. Artificial intelligence and cybersecurity, International Journal of Open Information Technologies, 2022, vol. 10, no. 9, pp. 135—147.
- Schott L., Rauber J., Bethge M., Brendel W. Towards the first adversarially robust neural network model on MNIST, arXiv: 1805.09190.
- Song Y., Kim T., Nowozin S., Ermon S., Kushman N. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, arXiv: 1710.10766.
- Madry A., Makelov A., Schmidt L., Tsipras D., Vladu A. Towards Deep Learning Models Resistant to Adversarial Attacks, arXiv: 1706.06083.