Hybrid genetic algorithm and simulated annealing (HGASA) in global function optimization

We first implemented the sequential HGASA on a Sun workstation; on a sample function optimization problem it located the global optimum more reliably than several sequential optimization algorithms, which showed low efficiency and limited reliability. However, the sequential HGASA incurs a long run time. We therefore implemented a parallel HGASA using the Message Passing Interface (MPI) on a high-performance computer and tested it on a set of commonly used benchmark function optimization problems. The performance of this parallel approach was analysed on an IBM Beowulf PC cluster in terms of program execution time, relative speedup, and efficiency. Sketches of the sequential algorithm, of its MPI parallelization, and of the reported metrics are given below.
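To make the sequential algorithm concrete, the following is a minimal sketch of one common way of hybridizing a genetic algorithm with simulated annealing: GA offspring are accepted or rejected by a Metropolis test whose temperature is cooled between generations. The sphere objective, the parameter values, and this particular hybridization scheme are illustrative assumptions, not details taken from the paper.

```c
/* Minimal sequential HGASA sketch: a GA whose offspring are accepted or
 * rejected by a simulated-annealing (Metropolis) test under a cooling
 * temperature.  Objective and parameters are illustrative only. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define POP   40          /* population size            */
#define DIM   10          /* problem dimension          */
#define GENS  500         /* number of generations      */
#define LO   -5.12        /* lower bound of search box  */
#define HI    5.12        /* upper bound of search box  */

static double frand(double a, double b) { return a + (b - a) * rand() / (double)RAND_MAX; }

/* Sample objective: sphere function, global minimum 0 at the origin. */
static double f(const double *x) {
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += x[i] * x[i];
    return s;
}

int main(void) {
    double pop[POP][DIM], fit[POP], temp = 100.0, cool = 0.99;
    srand(1);
    for (int i = 0; i < POP; i++) {                 /* random initial population */
        for (int j = 0; j < DIM; j++) pop[i][j] = frand(LO, HI);
        fit[i] = f(pop[i]);
    }
    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < POP; i++) {
            /* tournament selection of two parents */
            int a = rand() % POP, b = rand() % POP;
            int p1 = fit[a] < fit[b] ? a : b;
            a = rand() % POP; b = rand() % POP;
            int p2 = fit[a] < fit[b] ? a : b;
            /* arithmetic crossover plus small uniform mutation */
            double child[DIM];
            for (int j = 0; j < DIM; j++) {
                double w = frand(0.0, 1.0);
                child[j] = w * pop[p1][j] + (1.0 - w) * pop[p2][j];
                if (frand(0.0, 1.0) < 0.1) child[j] += frand(-0.5, 0.5);
                if (child[j] < LO) child[j] = LO;
                if (child[j] > HI) child[j] = HI;
            }
            /* SA (Metropolis) acceptance: the child replaces parent p1 if it is
             * better, or with probability exp(-delta/temp) if it is worse.     */
            double fc = f(child), delta = fc - fit[p1];
            if (delta < 0.0 || frand(0.0, 1.0) < exp(-delta / temp)) {
                for (int j = 0; j < DIM; j++) pop[p1][j] = child[j];
                fit[p1] = fc;
            }
        }
        temp *= cool;                               /* geometric cooling schedule */
    }
    double best = fit[0];
    for (int i = 1; i < POP; i++) if (fit[i] < best) best = fit[i];
    printf("best objective value found: %g\n", best);
    return 0;
}
```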
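The paper does not specify how the MPI parallelization is organized, so the sketch below only illustrates one plausible structure: an island model in which each MPI process runs its own search and periodically migrates its best solution around a ring, with a final MPI_MINLOC reduction to collect the global best and MPI_Wtime to measure execution time. The per-island search is reduced to a random-perturbation stand-in for the full sequential HGASA so that the communication pattern stays visible; rank counts, epoch lengths, and the ring topology are all assumptions.

```c
/* Island-model MPI sketch (illustrative): each rank searches independently,
 * migrates its best solution to a ring neighbour each epoch, and the global
 * best is gathered with an MPI_MINLOC reduction. */
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define DIM 10
#define EPOCHS 20               /* number of migration intervals      */
#define STEPS_PER_EPOCH 500     /* local search steps between swaps   */

static double frand(double a, double b) { return a + (b - a) * rand() / (double)RAND_MAX; }

static double f(const double *x) {          /* sample objective: sphere function */
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += x[i] * x[i];
    return s;
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    srand(1234 + rank);                     /* a different random stream per island */

    double best[DIM], fbest, cand[DIM];
    for (int j = 0; j < DIM; j++) best[j] = frand(-5.12, 5.12);
    fbest = f(best);

    double t0 = MPI_Wtime();
    for (int e = 0; e < EPOCHS; e++) {
        /* stand-in for one epoch of the sequential HGASA on this island */
        for (int s = 0; s < STEPS_PER_EPOCH; s++) {
            for (int j = 0; j < DIM; j++) cand[j] = best[j] + frand(-0.1, 0.1);
            double fc = f(cand);
            if (fc < fbest) { fbest = fc; for (int j = 0; j < DIM; j++) best[j] = cand[j]; }
        }
        /* migration: send our best to the next rank, receive from the previous */
        double out[DIM + 1], in[DIM + 1];
        for (int j = 0; j < DIM; j++) out[j] = best[j];
        out[DIM] = fbest;
        MPI_Sendrecv(out, DIM + 1, MPI_DOUBLE, (rank + 1) % size, 0,
                     in,  DIM + 1, MPI_DOUBLE, (rank - 1 + size) % size, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (in[DIM] < fbest) {              /* adopt the immigrant if it is better */
            fbest = in[DIM];
            for (int j = 0; j < DIM; j++) best[j] = in[j];
        }
    }
    double elapsed = MPI_Wtime() - t0;

    /* gather the global best value and the rank that holds it */
    struct { double val; int rank; } loc = { fbest, rank }, glob;
    MPI_Reduce(&loc, &glob, 1, MPI_DOUBLE_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global best %g (rank %d), wall time %.3f s on %d processes\n",
               glob.val, glob.rank, elapsed, size);
    MPI_Finalize();
    return 0;
}
```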
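For reference, the reported metrics are usually defined as follows, where $T_1$ is the run time of the parallel program on one process and $T_p$ its run time on $p$ processes:

$$
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}.
$$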