Plenary lecture 3: parameter tuning of swarm intelligence algorithms

Most real-life problems can be represented as some kind of optimization problem. Easy optimization problems were solved long ago, so nowadays only hard problems are of research interest. Many discrete (combinatorial) as well as some continuous optimization problems are intractable, yet of great practical interest. The traveling salesman problem (TSP) is a classic example that has been studied longer than almost any other and is therefore often used as a benchmark.

The main difficulty with hard optimization problems is that they have an enormous number of suboptimal solutions (local minima) and there is no guidance on where to search. Standard downhill methods fail in this situation. A typical benchmark function of this kind is the Rastrigin function, a sphere modulated by small cosine waves (a short sketch is given below). The oldest way to deal with such problems is the Monte Carlo method. It is the equivalent of trying to find the deepest point of the oceans by measuring the depth many times at random locations and hoping that the best measurement will be close to the global optimum. While the Monte Carlo method is usable for some applications, its blind search is not sufficient for many others.

In this rather hopeless situation, researchers turned from mathematically exact methods to belief. Nature does miraculous things: we know the results, but we do not understand the mechanism. For hard optimization problems we try to mimic natural processes. Older attempts included the simulation of evolution (through genetic modification and survival of the fittest) and simulated annealing. Recently, swarm intelligence has become prominent, exploiting the fact that extremely simple individuals exhibit remarkable collective intelligence. Examples include ant colonies, honey bee colonies, flocks of birds, schools of fish, etc.

These nature-inspired metaheuristics simulate various natural phenomena. We talk about bee colony food finding or ant colony path finding, but in essence, in all this diverse mimicking we do two things: we exploit good solutions already found, but we also move to unknown places in order to avoid being trapped in local minima. The success of any such algorithm is determined by a proper balance between exploitation and exploration. This balance is maintained by adjusting certain parameters and also by applying certain rules in particular situations. By such adjustments an algorithm can become much better for some class of problems (of course, according to the no-free-lunch (NFL) theorem, it cannot become universally good for all problems). This plenary lecture will demonstrate a few successful examples of such adjustments.
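For concreteness, the n-dimensional Rastrigin function mentioned above is f(x) = 10n + sum_i (x_i^2 - 10 cos(2*pi*x_i)): a sphere term modulated by cosine waves, with global minimum f(0) = 0 surrounded by a grid of local minima. A minimal NumPy sketch follows; the constant A = 10 and the customary search domain [-5.12, 5.12]^n are standard conventions, not details taken from the lecture.

    import numpy as np

    def rastrigin(x, A=10.0):
        # Sphere term x_i^2 plus cosine ripples of amplitude A.
        # Global minimum is f(0) = 0; usually searched on [-5.12, 5.12]^n.
        x = np.asarray(x, dtype=float)
        return A * x.size + np.sum(x**2 - A * np.cos(2.0 * np.pi * x))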
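The blind Monte Carlo search described by the ocean-depth analogy is equally short to sketch. Everything here (uniform sampling, the sample count) is an illustrative assumption, chosen only to show why blind search struggles on such landscapes.

    def monte_carlo_search(f, dim, low, high, n_samples=100_000, seed=None):
        # Blind random search: sample uniformly, keep the best point seen.
        # (Uses numpy as np and the rastrigin function from the sketch above.)
        rng = np.random.default_rng(seed)
        best_x, best_f = None, float("inf")
        for _ in range(n_samples):
            x = rng.uniform(low, high, size=dim)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    # Even 100,000 blind samples rarely land near the global optimum
    # of the 10-dimensional Rastrigin function.
    x_best, f_best = monte_carlo_search(rastrigin, 10, -5.12, 5.12)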
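To make the exploitation/exploration balance tangible, here is a generic textbook particle swarm optimization (PSO) sketch; it is not any specific algorithm from the lecture, and the parameter values shown (inertia w, cognitive coefficient c1, social coefficient c2) are common defaults rather than tuned ones. Larger w keeps particles moving through unknown regions (exploration), while larger c1 and c2 pull them toward already-found good solutions (exploitation).

    def pso(f, dim, low, high, n_particles=30, n_iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=None):
        # Minimal PSO: w controls momentum (exploration); c1 and c2 control
        # attraction to personal and swarm bests (exploitation).
        rng = np.random.default_rng(seed)
        x = rng.uniform(low, high, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.apply_along_axis(f, 1, x)
        g = pbest[np.argmin(pbest_f)].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, low, high)
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

A common tuning rule, in the spirit of the lecture's theme, is to decrease w over the iterations so that the swarm explores broadly at the start and exploits the best-found regions near the end.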