Journal "Software Engineering"
a journal on theoretical and applied science and technology
ISSN 2220-3397

Issue No. 10, 2025

DOI: 10.17587/prin.16.507-516
UVM Verification Coverage Achievement Acceleration using Gradient Boosting and Hybrid Methods
A. D. Manerkin, Junior Researcher, manerkin@cs.niisi.ras.ru, Scientific Research Institute for System Analysis of the National Research Centre "Kurchatov Institute", Moscow, 117218, Russian Federation
Corresponding author: Alexey D. Manerkin, Junior Researcher, Scientific Research Institute for System Analysis of the National Research Centre "Kurchatov Institute", Moscow, 117218, Russian Federation. E-mail: manerkin@cs.niisi.ras.ru
Received on April 18, 2025
Accepted on May 20, 2025

The article discusses the applicability of machine learning to accelerating the functional verification of microprocessors. The study presents a comparative analysis of the major implementations of gradient boosting (XGBoost, LightGBM, CatBoost), one of the most widely used machine learning methods, in accelerating code coverage through transaction filtering. Based on the results, hybrid models combining the strengths of these implementations are proposed. A comparative evaluation of the hybrid methods (blending, stacking, and gradient transfer) identifies the most efficient approach for reaching maximum coverage with fewer transactions in UVM-based functional verification.
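To make the transaction-filtering idea concrete, below is a minimal illustrative sketch (not the article's actual configuration): three gradient boosting classifiers are trained on features of previously simulated transactions, and their blended probability of hitting uncovered code is used to discard unpromising stimuli before simulation. The synthetic features, equal averaging weights, and the 0.5 threshold are assumptions made for illustration only; the blending, stacking, and gradient transfer schemes compared in the article may differ.

import numpy as np
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for transaction features (e.g., opcode, operands,
# addressing mode encoded numerically) and a label: 1 if a previously
# simulated transaction with these features increased code coverage.
X_train = rng.normal(size=(2000, 8))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)

models = [
    XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss"),
    LGBMClassifier(n_estimators=200, max_depth=4, verbose=-1),
    CatBoostClassifier(iterations=200, depth=4, verbose=0),
]
for m in models:
    m.fit(X_train, y_train)

def blend_score(X):
    # Simple averaged blend of per-model probabilities of new coverage;
    # equal weights are an illustrative assumption.
    probs = [m.predict_proba(X)[:, 1] for m in models]
    return np.mean(probs, axis=0)

# Filter a batch of candidate transactions before simulation: keep only
# those the blended model expects to contribute new coverage.
candidates = rng.normal(size=(500, 8))
keep = candidates[blend_score(candidates) > 0.5]
print(f"simulating {len(keep)} of {len(candidates)} candidate transactions")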

Keywords: functional verification, UVM, code coverage, machine learning, gradient boosting, blending, stacking, gradient transfer
pp. 507—516
For citation:
Manerkin A. D. UVM Verification Coverage Achievement Acceleration using Gradient Boosting and Hybrid Methods, Programmnaya Ingeneria, 2025, vol. 16, no. 10, pp. 507—516. DOI: 10.17587/prin.16.507-516 (in Russian).
References:
  1. Saes L. Unit test generation using machine learning, Master's thesis, 2018, University of Amsterdam, 56 p.
  2. Sharif A., Marijan D., Liaaen M. DeepOrder: Deep Learning for Test Case Prioritization in Continuous Integration Testing, 2021 IEEE International Conference on Software Maintenance and Evolution (ICSME), Luxembourg, 2021, pp. 525—534. DOI: 10.1109/ICSME52107.2021.00053.
  3. Allagi S., Rachh R. Analysis of Network log data using Machine Learning, 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), Bombay, India, 2019, pp. 1—3. DOI: 10.1109/I2CT45611.2019.9033737.
  4. Manerkin A. D., Grevtsev N. A., Chibisov P. A. Machine learning methods application during functional verification for the required coverage achievement acceleration, Sbornik tezisov Rossiyskogo foruma "Mikroelektronika 2024", 10-ya Nauchnaya konferentsiya "EKB i mikroelektronnye moduli", Sirius University of Science and Technology, 2024, pp. 135—136 (in Russian).
  5. Akiba T., Sano S., Yanase T. et al. Optuna: A next-generation hyperparameter optimization framework, Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, United States, 2019. DOI: 10.48550/arXiv.1907.10902.
  6. Chen T., Guestrin C. XGBoost: A Scalable Tree Boosting System, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785—794. DOI: 10.1145/2939672.2939785.
  7. Ke G., Meng Q., Finley T. et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree, Advances in Neural Information Processing Systems, 2017, vol. 30, pp. 3146—3154.
  8. Prokhorenkova L., Gusev G., Vorobev A. et al. CatBoost: unbiased boosting with categorical features, Advances in Neural Information Processing Systems, 2018, vol. 31, pp. 6638—6648. DOI: 10.48550/arXiv.1706.09516.
  9. Khyani D., Jakkula S., Gowda S. et al. An Interpretation of Stacking and Blending Approach in Machine Learning, International Research Journal of Engineering and Technology (IRJET), 2021, vol. 8, no. 7, pp. 3117—3120.
  10. Wolpert D. H. Stacked generalization, Neural Networks, 1992, vol. 5, no. 2, pp. 241—259. DOI: 10.1016/S0893-6080(05)80023-1.
  11. Friedman J. H. Greedy Function Approximation: A Gradient Boosting Machine, The Annals of Statistics, 2001, vol. 29, no. 5, pp. 1189—1232. DOI: 10.1214/aos/1013203451.
  12. Breiman L. Bagging predictors, Machine Learning, 1996, vol. 24, no. 2, pp. 123—140. DOI: 10.1007/BF00058655.