Journal "Software Engineering"
A journal on theoretical and applied science and technology
ISSN 2220-3397

Issue No. 11, 2025

DOI: 10.17587/prin.16.558-569
Evaluating Key Elements of Error Traces in Static Software Verification
V. O. Mordan, Research Associate, mordan@ispras.ru, Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, 109004, Russian Federation, V. S. Mutilin, Leading Researcher, mutilin@ispras.ru, Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, 109004, Russian Federation; Moscow Institute of Physics and Technology (State University), Dolgoprudny, Moscow Region, 141701, Russian Federation
Corresponding author: Vitalii O. Mordan, Research Associate, Ph.D. (Physics and Mathematics), Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, 109004, Russian Federation. E-mail: mordan@ispras.ru
Received on May 29, 2025
Accepted on July 08, 2025

Software verification is a resource-intensive process that combines automatic program analysis with manual analysis of its results. While recent advances in verification techniques and cloud-based infrastructures have significantly accelerated the automatic phase, manual analysis remains a critical bottleneck, particularly for large-scale software systems, because it requires specialized developer expertise. Verification results typically include warnings about potential bugs, each of which must either be corrected or classified as a false positive. In this work, we address this challenge by introducing a formal notion of thoroughness for error traces in the context of software verification. We define thoroughness as a measure of how comprehensively an error trace captures the elements necessary for understanding and validating a reported issue. To evaluate this concept in practice, we apply it to the verification results produced by tools participating in the Competition on Software Verification (SV-COMP). Our analysis demonstrates that the proposed metric is both feasible and informative in real-world verification scenarios. Furthermore, we argue that thoroughness can serve as a meaningful metric for comparing verification tools, identifying strengths and weaknesses in how they represent potential errors, and improving automated validation processes. Our evaluation also shows how the metric can be used to determine which specific elements must be present in an error trace to support effective visualization. By identifying these key elements, we aim to facilitate more efficient manual analysis and promote the development of more user-friendly verification tools.
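The idea of thoroughness as a measure of how comprehensively an error trace captures essential elements can be illustrated with a minimal sketch. The element names, their set, and the simple "fraction of key elements present" scoring below are illustrative assumptions for exposition only, not the paper's formal definition:

```python
# Hypothetical sketch: score an error trace by the share of key elements
# it contains. The element catalogue below is an assumption made for
# illustration; the paper defines thoroughness formally.

KEY_ELEMENTS = {
    "violated_property",   # which safety property was violated
    "entry_point",         # where the reported execution starts
    "assumptions",         # variable values assumed along the path
    "branch_conditions",   # outcomes of conditionals along the path
    "function_calls",      # calls and returns along the path
    "error_location",      # the final statement where the bug manifests
}

def thoroughness(trace_elements: set) -> float:
    """Return the fraction of key elements present in a trace (0.0..1.0)."""
    present = trace_elements & KEY_ELEMENTS
    return len(present) / len(KEY_ELEMENTS)

# A sparse trace reporting only the property, final location, and calls:
trace = {"violated_property", "error_location", "function_calls"}
score = thoroughness(trace)  # 3 of 6 elements -> 0.5
```

Under this toy scoring, a tool whose traces consistently score near 1.0 gives reviewers more context for validating warnings, while low scores point at elements the tool omits.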

Keywords: software verification, thoroughness, verification results, validation, error trace
pp. 558—569
For citation:
Mordan V. O., Mutilin V. S. Evaluating Key Elements of Error Traces in Static Software Verification, Programmnaya Ingeneria, 2025, vol. 16, no. 11, pp. 558—569. DOI: 10.17587/prin.16.558-569. (in Russian).
References:
  1. Mordan V., Novikov E. Minimizing the Number of Static Verifier Traces to Reduce Time for Finding Bugs in Linux Kernel Modules, SYRCoSE'2014, 2014. DOI: 10.15514/SYRCOSE-2014-8-5.
  2. Beyer D. Software Verification: 10th Comparative Evaluation (SV-COMP 2021), Proc. TACAS 2021, 2021, vol. 2, pp. 401—422. DOI: 10.1007/978-3-030-72013-1_24.
  3. Beyer D. Progress on Software Verification: SV-COMP 2022, Proc. TACAS 2022, 2022, vol. 2, pp. 375—402. DOI: 10.1007/978-3-030-99527-0_20.
  4. Beyer D. Competition on Software Verification and Witness Validation: SV-COMP 2023, Proc. TACAS 2023, 2023, vol. 2, pp. 495—522. DOI: 10.1007/978-3-031-30820-8_29.
  5. Novikov E., Zakharov I. Verification of Operating System Monolithic Kernels Without Extensions, Leveraging Applications of Formal Methods, Verification and Validation. ISoLA 2018, 2018, vol. IV, pp. 230—248. DOI: 10.1007/978-3-030-03427-6_19.
  6. Beyer D., Dangl M., Dietsch D. et al. Verification witnesses, ACM Transactions on Software Engineering and Methodology, 2022, vol. 31, no. 4, article 57. DOI: 10.1145/3477579.
  7. Beyer D., Dangl M., Dietsch D., Heizmann M. Correctness witnesses: exchanging verification results between verifiers, Proc. 24th ACM SIGSOFT Int. Symp. Foundations of Software Engineering, 2016, pp. 326—337. DOI: 10.1145/2950290.2950351.
  8. Beyer D., Dangl M., Dietsch D. et al. Witness validation and stepwise testification across software verifiers, Proc. 10th Joint Meeting on Foundations of Software Engineering, 2015, pp. 721—733. DOI: 10.1145/2786805.2786867.
  9. Beyer D., Keremoglu M. E. CPAchecker: A Tool for Configurable Software Verification, Proc. Computer Aided Verification, Berlin, Springer, 2011, pp. 184—190. DOI: 10.1007/978-3-642-22110-1_16.
  10. Heizmann M., Christ J., Dietsch D. et al. Ultimate Automizer with SMTInterpol (Competition Contribution), Proc. TACAS 2013, 2013, pp. 641—643. DOI: 10.1007/978-3-642-36742-7_53.
  11. Khoroshilov A., Mutilin V., Novikov E. et al. Towards an Open Framework for C Verification Tools Benchmarking, Perspectives of Systems Informatics, Berlin, Springer, 2012, pp. 179—192. DOI: 10.1007/978-3-642-29709-0_17.
  12. Beyer D. Verification Witnesses from Verification Tools (SV-COMP 2023), Zenodo, 2023. DOI: 10.5281/zenodo.7627791.
  13. Mordan V. Witness Structure and Thoroughness Analysis for SV-COMP'2023, Zenodo, 2023. DOI: 10.5281/zenodo.15808896 (in Russian).
  14. Müller P., West M., MacEachern S. Bayesian Models for Non-linear Autoregressions, Journal of Time Series Analysis, 1997, vol. 18, no. 6, pp. 593—614. DOI: 10.1111/1467-9892.00070.
  15. Aljaafari F., Shmarov F., Manino E. et al. EBF 4.2: Black-Box Cooperative Verification for Concurrent Programs (Competition Contribution), Proc. TACAS 2023, 2023, vol. 2, pp. 541—546. DOI: 10.1007/978-3-031-30820-8_33.
  16. Leeson W., Dwyer M. Graves-CPA: A Graph-Attention Verifier Selector (Competition Contribution), Proc. TACAS 2022, 2022, vol. 2, pp. 440—445. DOI: 10.1007/978-3-030-99527-0_28.
  17. Richter C., Wehrheim H. PeSCo: Predicting Sequential Combinations of Verifiers (Competition Contribution), Proc. TACAS 2019, 2019, vol. 3, pp. 229—233. DOI: 10.1007/978-3-030-17502-3_19.
  18. Scott R., Dockins R., Ravitch T., Tomb A. Crux: Symbolic execution meets SMT-based verification (Competition Contribution), Zenodo, 2022. DOI: 10.5281/zenodo.6147218.
  19. Kettl M., Lemberger T. The Static Analyzer Infer in SV-COMP (Competition Contribution), Proc. TACAS 2022, 2022, vol. 2, pp. 451—456. DOI: 10.1007/978-3-030-99527-0_30.
  20. Kroening D., Tautschnig M. CBMC: C Bounded Model Checker (Competition Contribution), Proc. TACAS 2014, 2014, pp. 389—391. DOI: 10.1007/978-3-642-54862-8_26.
  21. Ayaziová P., Strejček J. Symbiotic-Witch 2: More Efficient Algorithm and Witness Refutation (Competition Contribution), Proc. TACAS 2023, 2023, vol. 2, pp. 523—528. DOI: 10.1007/978-3-031-30820-8_30.
  22. Monat R., Ouadjaout A., Miné A. Mopsa-C: Modular Domains and Relational Abstract Interpretation for C Programs (Competition Contribution), Proc. TACAS 2023, 2023, vol. 2, pp. 565—570. DOI: 10.1007/978-3-031-30820-8_37.