Test Case Selection Based on the Test-Driven Development Method
Subject area: Information and Communication Technology
Zohreh Mafi 1*, Seyed-Hassan Mirian-Hosseinabadi 2
1 - Research Institute for Communications and Information Technology (ICT Research Institute)
2 - Sharif University of Technology
Keywords: software testing, test-driven development, regression testing, program differencing, segmentation, version control
Abstract:
Test-Driven Development (TDD) is a test-first software development practice in which the implementation of each piece of code begins with writing a unit test. TDD has attracted attention because of its many benefits, including readable, well-structured, and concise code, improved quality, productivity, and reliability, and the possibility of regression testing thanks to the comprehensive suite of unit tests it produces. The large number of unit test cases generated by this practice strengthens confidence in the code; however, repeatedly executing all of these test cases lengthens regression testing. The aim of this paper is to present a test case selection algorithm that reduces regression testing time in TDD. Various ideas have been proposed for selecting test cases and shortening regression testing, most of which depend on the programming language and the software development process. The idea presented here is based on the program differencing approach and on the nature of TDD itself: meaningful semantic and structural relationships are established between unit tests and blocks of program code, and test cases are then selected using these relationships.
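To illustrate the general idea only, and not the authors' actual algorithm, the following minimal Python sketch assumes a hypothetical mapping from each unit test to the line ranges of the code blocks it exercises, something a TDD workflow could record as each test drives its code into existence. It uses a plain line-based textual diff (Python's standard difflib) to find the modified lines between two program versions and selects only the tests whose blocks overlap those lines.

```python
import difflib
from typing import Dict, List, Set


def changed_lines(old_src: str, new_src: str) -> Set[int]:
    """Return 1-based line numbers of the new version that differ from the old one
    (a purely textual, line-based comparison)."""
    changed: Set[int] = set()
    matcher = difflib.SequenceMatcher(None, old_src.splitlines(), new_src.splitlines())
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            changed.update(range(j1 + 1, j2 + 1))
    return changed


def select_tests(test_to_blocks: Dict[str, List[range]], modified: Set[int]) -> List[str]:
    """Select only the tests whose associated code blocks overlap the modified lines."""
    return [
        test
        for test, blocks in test_to_blocks.items()
        if any(line in block for block in blocks for line in modified)
    ]


if __name__ == "__main__":
    old_version = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"
    new_version = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return b - a\n"

    # Hypothetical test-to-block map, e.g. recorded while each TDD test drives its code block.
    mapping = {"test_add": [range(1, 3)], "test_sub": [range(4, 6)]}

    print(select_tests(mapping, changed_lines(old_version, new_version)))  # -> ['test_sub']
```

The paper's approach relies on semantic and structural links between tests and code blocks rather than a purely textual diff; the sketch only shows how a change-to-test mapping can shrink the regression suite.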