Journal "Software Engineering"
A journal of theoretical and applied science and technology
ISSN 2220-3397

Issue No. 4, 2025

DOI: 10.17587/prin.16.190-198
Adversarial Testing of Image Segmentation Models
E. A. Vorobyev, Student, voro6yov.egor@gmail.com, D. E. Namiot, Dr. Sci., Leading Researcher, dnamiot@gmail.com, Lomonosov Moscow State University, Moscow, 119991, Russian Federation
Corresponding author: Egor A. Vorobyev, Student, Lomonosov Moscow State University, Moscow, 119991, Russian Federation, E-mail: voro6yov.egor@gmail.com
Received on December 04, 2024
Accepted on January 13, 2025

Image segmentation is one of the most frequently addressed tasks in image processing. However, image segmentation models, like other deep learning models, are susceptible to adversarial attacks: deliberate data modifications, introduced at various stages of the standard machine learning pipeline, that prevent the model from functioning correctly and pose a serious problem for practical deployments of deep learning models. This paper examines so-called evasion attacks, in which input data is modified at the execution (inference) stage. The article presents an original tool, the Segmentation Robustness Framework (SRF), designed for testing the robustness of segmentation models against digital adversarial attacks.
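To make the attack class concrete, below is a minimal, self-contained sketch of an FGSM-style evasion step (the method of Goodfellow et al., reference 10): the input image is perturbed at inference time by a small step in the direction of the sign of the loss gradient. This is an illustration only, not the SRF API; the toy per-pixel sigmoid "model", the function names, and all parameter values are hypothetical.

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon):
    """FGSM evasion step: move epsilon in the direction of the loss
    gradient's sign, then clip back to the valid pixel range [0, 1]."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

def model_scores(image, w, b):
    """Toy per-pixel 'segmentation model': sigmoid of a linear score."""
    return 1.0 / (1.0 + np.exp(-(w * image + b)))

def bce_grad_wrt_input(image, w, b, target):
    """Gradient of per-pixel binary cross-entropy w.r.t. the input.
    For p = sigmoid(w*x + b): dL/dx = (p - target) * w."""
    p = model_scores(image, w, b)
    return (p - target) * w

rng = np.random.default_rng(0)
img = rng.random((4, 4))                              # toy 4x4 "image"
target = (rng.random((4, 4)) > 0.5).astype(float)     # toy binary mask
g = bce_grad_wrt_input(img, w=3.0, b=-1.5, target=target)
adv = fgsm_perturb(img, g, epsilon=0.03)

# The perturbation is bounded by epsilon in the L-infinity norm
assert np.max(np.abs(adv - img)) <= 0.03 + 1e-12
```

In a real attack the gradient would come from backpropagation through the segmentation network (e.g. via PyTorch autograd); the L-infinity bound on the perturbation is what keeps the adversarial image visually indistinguishable from the original.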

Keywords: machine learning, deep learning, image segmentation, adversarial attacks
pp. 190—198
For citation:
Vorobyev E. A., Namiot D. E. Adversarial Testing of Image Segmentation Models, Programmnaya Ingeneria, 2025, vol. 16, no. 4, pp. 190—198. DOI: 10.17587/prin.16.190-198.
References:
  1. Minaee S., Boykov Y., Porikli F. et al. Image segmentation using deep learning: A survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, vol. 44, no. 7, pp. 3523—3542. DOI: 10.1109/TPAMI.2021.3059968.
  2. Vorob'ev E. A. Analysis of adversarial attacks on image segmentation systems, International Journal of Open Information Technologies, 2024, vol. 12, no. 10, pp. 1—25 (in Russian).
  3. Namiot D. E. Attack schemes on machine learning models, International Journal of Open Information Technologies, 2023, vol. 11, no. 5, pp. 68—86 (in Russian).
  4. Chehonina E. A., Kostjumov V. V. An overview of adversarial attacks and defense methods for object detectors, International Journal of Open Information Technologies, 2023, vol. 11, no. 7, pp. 11—20 (in Russian).
  5. Kostjumov V. V. An overview and systematization of evasion attacks on computer vision models, International Journal of Open Information Technologies, 2022, vol. 10, no. 10, pp. 11—20 (in Russian).
  6. Namiot D. E., Il'jushin E. A. Trusted artificial intelligence platforms: certification and audit, International Journal of Open Information Technologies, 2024, vol. 12, no. 1, pp. 43—60 (in Russian).
  7. Kumar R. S. S., Nystrom M., Lambert J. et al. Adversarial machine learning-industry perspectives, 2020 IEEE Security and Privacy Workshops (SPW), IEEE, 2020, pp. 69—75. DOI: 10.1109/SPW50608.2020.00028.
  8. Tsai M. J., Lin P. Y., Lee M. E. Adversarial attacks on medical image classification, Cancers, 2023, vol. 15, no. 17, article 4228. DOI: 10.3390/cancers15174228.
  9. Poryvaj M. V. A comparative study of natural image augmentation methods, International Journal of Open Information Technologies, 2024, vol. 12, no. 10, pp. 26—33 (in Russian).
  10. Goodfellow I. J., Shlens J., Szegedy C. Explaining and harnessing adversarial examples, arXiv preprint arXiv:1412.6572, 2014. DOI: 10.48550/arXiv.1412.6572.
  11. Guesmi A., Hanif M. A., Ouni B., Shafique M. Physical adversarial attacks for camera-based smart systems: Current trends, categorization, applications, research challenges, and future outlook, arXiv preprint arXiv:2308.06173, 2023. DOI: 10.48550/arXiv.2308.06173.
  12. Tramer F., Kurakin A., Papernot N. et al. Ensemble adversarial training: Attacks and defenses, arXiv preprint arXiv:1705.07204, 2017. DOI: 10.48550/arXiv.1705.07204.
  13. Kim H. Torchattacks: A PyTorch repository for adversarial attacks, arXiv preprint arXiv:2010.01950, 2020.
  14. SRF, available at: https://github.com/wntic/segmentation-robustness-framework (date of access 03.12.2024).
  15. segmentation_models.pytorch (SMP), available at: https://github.com/qubvel-org/segmentation_models.pytorch (date of access 04.01.2025).
  16. Master's program "Artificial Intelligence in Cybersecurity" (FGOS), October 2024, available at: https://cs.msu.ru/node/3732 (date of access 02.12.2024) (in Russian).
  17. Namiot D. E., Il'jushin E. A., Chizhov I. V. Artificial intelligence and cybersecurity, International Journal of Open Information Technologies, 2022, vol. 10, no. 9, pp. 135—147 (in Russian).