A Comparative Study of Regression Testing Techniques: An Industry-Oriented Evaluation of Efficiency, Coverage and Cost

Authors

  • Trina Saha, R. P. Shaha University, Narayanganj, Dhaka, Bangladesh
  • Md. Siam, R. P. Shaha University, Narayanganj, Dhaka, Bangladesh

DOI:

https://doi.org/10.61453/joit.v2025no34

Keywords:

Regression Testing, Software Maintenance, Industrial Suitability, Selective Regression Testing, Agile CI/CD

Abstract

Regression testing is an essential component of software maintenance, aimed at ensuring that newly introduced changes do not negatively impact the existing functionality of a system. In today's fast-paced industrial environments, particularly those employing agile methodologies and Continuous Integration/Continuous Deployment (CI/CD) pipelines, the choice of regression testing technique significantly influences project timelines, resource allocation, and product quality. This research investigates and compares five widely adopted regression testing types: Corrective, Retest-All, Selective, Progressive, and Complete Regression Testing. The study's primary objective is to assess each technique's industrial suitability against key evaluation metrics: time efficiency, cost of execution, test coverage, automation/tool support, scalability, and risk of fault omission. We adopted a qualitative scoring methodology grounded in a comprehensive review of scholarly articles and industry reports; each testing type was critically analyzed and benchmarked in a comparison matrix that highlights its strengths and limitations. The analysis reveals that while each regression testing method serves distinct use cases, Selective Regression Testing strikes the best balance between efficiency and coverage for modern industrial needs, particularly in projects with frequent releases and constrained testing budgets. The novelty of this study lies in its holistic, literature-backed comparative framework, explicitly tailored to the industry context, an aspect often overlooked in prior academic evaluations.
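
To make the comparison-matrix methodology described above concrete, the short sketch below shows one way such a qualitative scoring matrix can be computed. It is an illustrative sketch only, not the study's actual instrument: the technique names and the six evaluation metrics come from the abstract, while the 1–5 scores and the metric weights are hypothetical placeholders (chosen so that Selective Regression Testing ranks first, consistent with the finding stated above).

```python
# Illustrative sketch of a comparison-matrix scoring for the five regression
# testing types. Scores and weights are hypothetical placeholders, not the
# study's actual data.

METRICS = ["time_efficiency", "execution_cost", "test_coverage",
           "tool_support", "scalability", "fault_omission_risk"]

# Hypothetical qualitative scores (1 = poor, 5 = excellent). Higher is better
# for every metric, so cost and risk are scored inversely
# (5 = low cost / low risk of missed faults).
SCORES = {
    "Corrective":  [4, 4, 2, 3, 3, 2],
    "Retest-All":  [1, 1, 5, 4, 2, 5],
    "Selective":   [4, 4, 4, 4, 4, 4],
    "Progressive": [3, 3, 3, 3, 4, 3],
    "Complete":    [1, 1, 5, 3, 1, 5],
}

# Hypothetical weights reflecting agile CI/CD priorities; they sum to 1.
WEIGHTS = [0.25, 0.20, 0.20, 0.10, 0.15, 0.10]


def weighted_score(scores: list[int]) -> float:
    """Weighted sum of one technique's per-metric scores."""
    return sum(s * w for s, w in zip(scores, WEIGHTS))


if __name__ == "__main__":
    # Rank techniques by descending weighted score.
    ranking = sorted(SCORES.items(),
                     key=lambda kv: weighted_score(kv[1]), reverse=True)
    for technique, scores in ranking:
        print(f"{technique:<12} {weighted_score(scores):.2f}")
```

Running the sketch prints the five techniques in descending weighted-score order, with Selective first under these placeholder inputs; in the study itself, the scores are derived qualitatively from the reviewed literature rather than computed.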

References

Cheng, R., Wang, S., Jabbarvand, R., & Marinov, D. (2024). Revisiting test-case prioritization on long-running test suites. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) (pp. 1–12). ACM. https://doi.org/10.1145/3650212.3680307

Das, S. (2025, January). CI/CD with Jenkins. Jenkins Blog. https://www.jenkins.io/blog/

Do, H., & Rothermel, G. (2006). On the use of mutation faults in empirical assessments of test case prioritization techniques. IEEE Transactions on Software Engineering, 32(9), 733–752. https://doi.org/10.1109/TSE.2006.92

Do, H., Elbaum, S., & Rothermel, G. (2005). Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact. Empirical Software Engineering, 10(4), 405–435. https://doi.org/10.1007/s10664-005-3861-2

Do, H., Mirarab, S., Tahvildari, L., & Rothermel, G. (2010). The effects of time constraints on test case prioritization: A series of controlled experiments. IEEE Transactions on Software Engineering, 36(5), 593–617. https://doi.org/10.1109/TSE.2010.58

dos Santos, L. B. R., Felício, T. G., de Andrade, R. M. C., de Souza, É. F., de Mello, R. M., & Figueiredo, E. (2024). Performance regression testing initiatives: A systematic mapping study. Information and Software Technology, 170, 107641. https://doi.org/10.1016/j.infsof.2024.107641

Elbaum, S., Karre, S., & Rothermel, G. (2003). Improving web application testing with user session data. In Proceedings of the 25th International Conference on Software Engineering (ICSE) (pp. 49–60). IEEE. https://doi.org/10.1109/ICSE.2003.1201188

Elbaum, S., Malishevsky, A., & Rothermel, G. (2002). Test case prioritization: A family of empirical studies. IEEE Transactions on Software Engineering, 28(2), 159–182. https://doi.org/10.1109/32.988497

Feldt, R., Torkar, R., Angelis, L., & Samuelsson, M. (2008). Towards individualized software engineering: Empirical studies should collect psychometrics. In Proceedings of the International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE). ACM. https://doi.org/10.1145/1370114.1370127

Garousi, V., Felderer, M., & Hacaloğlu, T. (2017). Software test maturity assessment and test process improvement: A multivocal literature review. Information and Software Technology, 85, 16–42. https://doi.org/10.1016/j.infsof.2017.01.001

Garousi, V., Özkan, R., & Betin-Can, A. (2018). Multi-objective regression test selection in practice: An empirical study in the defense software industry. Information and Software Technology, 103, 40–54. https://doi.org/10.1016/j.infsof.2018.06.007

Garousi, V., Petersen, K., & Ozkan, B. (2016). Challenges and best practices in industry–academia collaborations in software engineering: A systematic literature review. Information and Software Technology, 79, 106–127. https://doi.org/10.1016/j.infsof.2016.07.006

Gupta, P., Ivey, M., & Penix, J. (2011, June). Testing at the speed and scale of Google. Google Testing Blog. https://testing.googleblog.com/2011/06/testing-at-speed-and-scale-of-google.html

IEEE. (2008). IEEE standard for software and system test documentation (IEEE Std 829-2008) (pp. 1–150). IEEE. https://doi.org/10.1109/IEEESTD.2008.4578383

Kim, J.-M., & Porter, A. (2002). A history-based test prioritization technique for regression testing in resource-constrained environments. In Proceedings of the 24th International Conference on Software Engineering (ICSE ’02) (pp. 119–129). ACM. https://doi.org/10.1145/581339.581357

Kruchten, P., Hilliard, R., Kazman, R., Kozaczynski, W., Obbink, H., & Ran, A. (2002). Workshop on methods and techniques for software architecture review and assessment (SARA). In Proceedings of the 24th International Conference on Software Engineering (ICSE ’02) (p. 675). ACM. https://doi.org/10.1145/581339.581439

Leung, H. K. N., & White, L. J. (1990). A study of integration testing and software regression at the integration level. In Proceedings of the Conference on Software Maintenance (pp. 290–301). IEEE. https://doi.org/10.1109/ICSM.1990.131377

Lou, Y., Chen, J., Zhang, L., & Hao, D. (2019). A survey on regression test-case prioritization. Advances in Computers, 113, 1–46. https://doi.org/10.1016/bs.adcom.2018.10.001

Maspupah, A., Rahmani, A., & Min, J. (2021). Comparative study of regression testing tools feature on unit testing. Journal of Physics: Conference Series, 1869(1), 012098. https://doi.org/10.1088/1742-6596/1869/1/012098

Microsoft. (2025). Azure DevOps Server. https://learn.microsoft.com/azure/devops/server/

Pighin, M., & Marzona, A. (2005). Optimizing test to reduce maintenance. In Proceedings of the IEEE International Conference on Software Maintenance (ICSM) (pp. 465–472). IEEE. https://doi.org/10.1109/ICSM.2005.69

Poliarush, M. (2025, August). Agile regression testing: Process & best practices. Testomat.io Blog. https://testomat.io/blog/agile-regression-testing/

Powell, P., & Smalley, I. (2025, June). What is regression testing? TestingXperts Blog. https://www.testingxperts.com/blog/what-is-regression-testing/

Reinisch, C., Granzer, W., Praus, F., & Kastner, W. (2008). Integration of heterogeneous building automation systems using ontologies. In Proceedings of the 34th Annual Conference of IEEE Industrial Electronics (pp. 2736–2741). IEEE. https://doi.org/10.1109/IECON.2008.4758391

Ren, J., Xu, Z., Yu, C., Lin, C., Wu, G., & Tan, G. (2019). Execution allowance-based fixed-priority scheduling for probabilistic real-time systems. Journal of Systems and Software, 152, 120–133. https://doi.org/10.1016/j.jss.2019.03.001

Rothermel, G., & Harrold, M. J. (1996). Analyzing regression test selection techniques. IEEE Transactions on Software Engineering, 22(8), 529–551. https://doi.org/10.1109/32.536955

Rothermel, G., & Harrold, M. J. (1997). A safe, efficient regression test selection technique. ACM Transactions on Software Engineering and Methodology, 6(2), 173–210. https://doi.org/10.1145/248233.248262

Saha, T., & Palit, R. (2019). Practices of software testing techniques and tools in Bangladesh software industry. In Proceedings of the IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE) (pp. 1–10). IEEE. https://doi.org/10.1109/CSDE48274.2019.9162355

Sharif, A., Marijan, D., & Liaaen, M. (2021). DeepOrder: Deep learning for test case prioritization in continuous integration testing. arXiv Preprint, arXiv:2110.07443. https://doi.org/10.48550/arXiv.2110.07443

Tufano, M., Watson, C., Bavota, G., Di Penta, M., White, M., & Poshyvanyk, D. (2019). An empirical study on learning bug-fixing patches in the wild via neural machine translation. arXiv Preprint, arXiv:1812.08693. https://doi.org/10.48550/arXiv.1812.08693

Yoo, S., & Harman, M. (2012). Regression testing minimization, selection and prioritization: A survey. Software Testing, Verification and Reliability, 22(2), 67–120. https://doi.org/10.1002/stvr.430

Published

2025-12-31

How to Cite

Saha, T., & Siam, M. (2025). A Comparative Study of Regression Testing Techniques: An Industry-Oriented Evaluation of Efficiency, Coverage and Cost. Journal of Innovation and Technology, 2025(2), 1–11. https://doi.org/10.61453/joit.v2025no34