Utilization of Parallel Computing for Discrete Self-organizing Migration Algorithm

Evolutionary algorithms can take advantage of parallel computing because it decreases computational time and increases the size of processable instances. In this chapter, various options for parallelizing the Discrete Self-Organising Migrating Algorithm are described, with three implemented parallel variants covered in greater detail. They cover the most frequently used hardware and software technologies, namely: parallel computing with threads and shared memory; general-purpose programming on GPUs with CUDA; and distributed computing with MPI. The first two implementations speed up the computation; the last one additionally modifies the original algorithm by adding a new layer that simplifies its use in a distributed environment.
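The thread-based variant relies on the fact that individuals in an evolutionary population can be evaluated independently. The following is a minimal, hypothetical sketch of that pattern; the objective function and all names are illustrative placeholders, not the chapter's actual DSOMA implementation.

```python
# Hypothetical sketch: parallel fitness evaluation of a population
# using a shared-memory thread pool. The objective is a toy stand-in,
# not a real flow-shop makespan computation.
from concurrent.futures import ThreadPoolExecutor

def objective(permutation):
    # Placeholder cost function over a job permutation; in the
    # scheduling problems targeted by DSOMA this would be the
    # schedule's makespan.
    return sum(i * job for i, job in enumerate(permutation))

def evaluate_population(population, workers=4):
    # Each individual is evaluated independently, so the population
    # can be split across worker threads with no synchronisation
    # beyond the final gather of results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, population))

population = [[3, 1, 2], [2, 3, 1], [1, 2, 3]]
print(evaluate_population(population))
```

Note that for CPU-bound pure-Python objectives a process pool (or native threads in a compiled language) would be needed for real speedup; the sketch only illustrates the decomposition pattern shared by the thread, CUDA, and MPI variants.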
