Range Juggling for Better Convergence in Genetic Range Genetic Algorithms

One of the most important goals in evolutionary algorithms is to find the global solution as reliably as possible. Because such algorithms are computationally expensive, users do not want to re-run them merely to confirm that a result is truly global, even when they are not sure that it is. Evolutionary algorithms should therefore make every effort to give users confidence that the final result is close to the global solution. This paper describes the development of range juggling in the Genetic Range Genetic Algorithm (GRGA) for better convergence. GRGA is an updated version of the Adaptive Range Genetic Algorithm (ARGA). In ARGA, the search range changes every generation: in the initial stages it adapts to locate the region containing the global optimum, and in the final stages it shrinks so that the search converges on the global solution with higher accuracy. It is therefore critical to choose the system parameters, especially for the initial stage, so that the search neither becomes trapped in a local solution nor moves so fast that it overshoots the global one. Unlike ARGA, GRGA is free from such critical parameter settings, but its early convergence suffers because, once a search range is assigned, it does not change until the range is eliminated. To improve convergence, this paper proposes and examines range-juggling techniques. Numerical experiments show that the proposed method achieves better convergence and accuracy on simple problems, and that even for problems with a large number of design variables it reaches solutions close to the global optimum.
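The ARGA-style idea of a search range that shrinks over generations can be illustrated with a minimal sketch. The code below is a toy real-coded GA, not the authors' ARGA or GRGA: the function names, parameters (`shrink`, `pop_size`), and the simple re-centering rule are illustrative assumptions, and the sphere function stands in for an actual benchmark problem.

```python
import random

def sphere(x):
    # Simple benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def adaptive_range_ga(f, dim, lo, hi, pop_size=40, generations=200,
                      shrink=0.97, seed=0):
    """Toy real-coded GA whose sampling range shrinks each generation
    around the best individual found so far (an ARGA-like idea; this is
    a sketch, not the algorithm described in the paper)."""
    rng = random.Random(seed)
    center = [(lo + hi) / 2.0] * dim
    half = (hi - lo) / 2.0          # half-width of the current search range
    best, best_val = None, float("inf")
    for _ in range(generations):
        # Sample the population uniformly inside the current search range.
        pop = [[center[d] + rng.uniform(-half, half) for d in range(dim)]
               for _ in range(pop_size)]
        for ind in pop:
            v = f(ind)
            if v < best_val:
                best, best_val = ind, v
        # Re-center the range on the best-so-far and shrink it, trading
        # exploration for accuracy as the generations pass.
        center = list(best)
        half *= shrink
    return best, best_val

best, val = adaptive_range_ga(sphere, dim=5, lo=-10.0, hi=10.0)
```

The `shrink` factor plays the role of the critical initial-stage parameter discussed above: too small and the range collapses onto a local solution before the global region is found; too close to 1 and the accuracy of the final solution stays low.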