Introduction to doubling. A useful characterization of an algorithm’s efficiency, the worst-case time complexity gives an upper bound on how an increase in the size of the input, denoted n, increases the execution time of the algorithm, f(n). This relationship is often expressed in “big-Oh” notation, where f(n) is O(g(n)) means that the time increases by no more than on the order of g(n). Since the worst-case complexity of an algorithm is most evident when n is large [?], one approach for determining the big-Oh complexity of an algorithm is to conduct a doubling experiment with increasingly larger input sizes. By measuring the time needed to run the algorithm on inputs of size n and 2n, the algorithm’s order of growth can be determined [?]. The goal of a doubling experiment is to draw a conclusion about the efficiency of the algorithm from the ratio f(2n)/f(n), which represents the factor by which the runtime changes when the input size doubles from n to 2n. For instance, a ratio of 2 would indicate that doubling the input size doubled the runtime, leading to the conclusion that the algorithm under study is O(n) or O(n log n). Table 1 shows some common time complexities and their corresponding ratios.
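As a concrete illustration, the sketch below runs a small doubling experiment in Python. The names algorithm and make_input are hypothetical stand-ins, not part of any tool discussed here: they denote the algorithm under study and a generator of representative inputs of size n. The harness times the algorithm on inputs of size n and 2n and reports the observed ratio f(2n)/f(n) over several successive doublings.

# A minimal sketch of a doubling experiment. `algorithm` and `make_input`
# are hypothetical placeholders for the algorithm under study and an
# input generator that produces a representative input of size n.
import timeit

def algorithm(data):
    # Placeholder algorithm whose order of growth is being estimated.
    return sorted(data)

def make_input(n):
    # Placeholder input generator: a reverse-ordered list of size n.
    return list(range(n, 0, -1))

def doubling_ratios(start_size=1000, rounds=5, repeats=3):
    # Time the algorithm on inputs of size n and 2n, then report f(2n)/f(n).
    n = start_size
    for _ in range(rounds):
        data_n, data_2n = make_input(n), make_input(2 * n)
        f_n = min(timeit.repeat(lambda: algorithm(data_n), number=1, repeat=repeats))
        f_2n = min(timeit.repeat(lambda: algorithm(data_2n), number=1, repeat=repeats))
        print(f"n = {n:>8}  f(2n)/f(n) = {f_2n / f_n:.2f}")
        n *= 2

if __name__ == "__main__":
    doubling_ratios()

For a routine like the sorting placeholder above, the reported ratios should settle near 2 as n grows, consistent with an O(n) or O(n log n) order of growth.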
[1] Catherine C. McGeoch. A Guide to Experimental Algorithmics, 2012.
[2] Phil McMinn et al. Automatically Evaluating the Efficiency of Search-Based Test Data Generation for Relational Database Schemas, SEKE, 2015.
[3] Gordon Fraser et al. 1600 faults in 100 projects: automatically finding faults while achieving high coverage with EvoSuite, Empirical Software Engineering, 2015.
[4] Dirk Sudholt et al. Design and analysis of different alternating variable searches for search-based software testing, Theor. Comput. Sci., 2015.
[5] Phil McMinn et al. Search-Based Testing of Relational Schema Integrity Constraints Across Multiple Database Management Systems, 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, 2013.