Parallelization of the FICO Xpress-Optimizer

Computing hardware has largely reached the physical limits for speeding up individual computing cores. Consequently, the main line of progress for new hardware is to grow the number of computing cores within a single CPU, which makes the study of efficient parallelization schemes for computation-intensive algorithms increasingly important. A natural precondition for achieving reasonable speedups from parallelization is maintaining a high workload on the available computational resources. At the same time, reproducibility and reliability are key requirements for software used in industrial applications. In this paper, we present the new parallelization concept for the state-of-the-art mixed-integer programming (MIP) solver FICO Xpress-Optimizer. MIP solvers like Xpress are expected to be deterministic, which inevitably introduces synchronization latencies and renders the goal of a satisfying workload a challenge in itself. We address this challenge by following a partial-information approach and by separating the concept of simultaneous tasks from that of independent threads. Our computational results indicate that this leads to a much higher CPU workload and thereby to improved, almost linear, scaling on modern high-performance CPUs. As an added value, the solution path that Xpress takes is not only deterministic in a fixed environment, but also, to a certain extent, thread-independent. This paper is an extended version of Berthold et al. [Parallelization of the FICO Xpress-Optimizer, in Mathematical Software – ICMS 2016: 5th International Conference, G.-M. Greuel, T. Koch, P. Paule, and A. Sommese, eds., Springer International Publishing, Berlin, 2016, pp. 251–258], containing more detailed technical descriptions, illustrative examples, and updated computational results.
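
The central idea sketched in the abstract, decoupling units of work ("tasks") from the executing threads and merging their results only at deterministic synchronization points, can be illustrated with a small self-contained sketch. The following C++ program is not FICO Xpress code; the Task structure, the runTask function, and the XOR-style merge are hypothetical placeholders chosen only to make the mechanism visible. Assuming each task's result depends solely on its own input, the merged outcome is the same for any number of threads and any scheduling order.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// A single unit of work; in a MIP solver this might stand for a node dive,
// a cut separation round, or a heuristic call (hypothetical placeholder).
struct Task {
    int                id;
    unsigned long long input;
};

// Deterministic stand-in for the actual work: the result depends only on the
// task's input, never on timing or on which thread executes it.
static unsigned long long runTask(const Task& task)
{
    unsigned long long result = task.input;
    for (int i = 0; i < 1000; ++i)
        result = result * 6364136223846793005ULL + 1442695040888963407ULL;
    return result;
}

int main()
{
    const int      numTasks   = 64;
    const unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<Task> tasks(numTasks);
    for (int i = 0; i < numTasks; ++i)
        tasks[i] = Task{i, 1000ULL + i};

    std::vector<unsigned long long> results(numTasks, 0);  // one slot per task
    std::atomic<int> nextTask{0};  // shared counter: threads pull tasks dynamically

    auto worker = [&]() {
        for (;;) {
            const int t = nextTask.fetch_add(1);
            if (t >= numTasks)
                break;
            results[t] = runTask(tasks[t]);  // distinct slots: no data race, no timing effect
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < numThreads; ++i)
        pool.emplace_back(worker);
    for (auto& th : pool)
        th.join();  // synchronization point

    // Deterministic merge: results are combined in task order, so the outcome is
    // independent of the thread count and of which thread finished first.
    unsigned long long merged = 0;
    for (int t = 0; t < numTasks; ++t)
        merged ^= results[t];

    std::printf("merged result: %llu (threads used: %u)\n", merged, numThreads);
    return 0;
}
```

The sketch also mirrors the trade-off described above: the deterministic merge can only happen after all tasks of a synchronization round have finished, so keeping the pool of tasks large relative to the number of threads is what keeps the overall workload high.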

[1] Claude-Guy Quimper, et al. Integration of AI and OR Techniques in Constraint Programming, 2016, Lecture Notes in Computer Science.

[2] Yuji Shinano, et al. Distributed Domain Propagation, 2017, SEA.

[3] Matthew J. Saltzman, et al. Parallel branch, cut, and price for large-scale discrete optimization, 2003, Math. Program.

[4] George B. Dantzig, et al. Decomposition Principle for Linear Programs, 1960.

[5] Gautam Mitra, et al. A two-stage parallel branch and bound algorithm for mixed integer programs, 2004.

[6] Ailsa H. Land, et al. An Automatic Method of Solving Discrete Programming Problems, 1960.

[7] Jeff Linderoth, et al. Topics in parallel integer optimization, 1998.

[8] Matteo Fischetti, et al. Improving branch-and-cut performance by random sampling, 2016, Math. Program. Comput.

[9] Yuji Shinano, et al. ParaLEX: A Parallel Extension for the CPLEX Mixed Integer Optimizer, 2007, PVM/MPI.

[10] William J. Cook, et al. Computational experience with parallel mixed integer programming in a distributed environment, 1999, Ann. Oper. Res.

[11] Qi Huangfu, et al. Parallelizing the dual revised simplex method, 2015, Math. Program. Comput.

[12] Tobias Achterberg, et al. Mixed Integer Programming: Analyzing 12 Years of Progress, 2013.

[13] Michael C. Ferris, et al. Grid-Enabled Optimization with GAMS, 2009, INFORMS J. Comput.

[14] Thorsten Koch, et al. ParaSCIP: A Parallel Extension of SCIP, 2010, CHPC.

[15] Tobias Achterberg, et al. Constraint Integer Programming, 2007.

[16] Ted K. Ralphs, et al. Parallel Branch and Cut, 2006.

[17] Timo Berthold, et al. Parallelization of the FICO Xpress-Optimizer, 2016, ICMS.

[18] Thorsten Koch, et al. Parallel Solvers for Mixed Integer Linear Programming, 2016.

[19] Thorsten Koch, et al. Solving Hard MIPLIB2003 Problems with ParaSCIP on Supercomputers: An Update, 2014, IEEE International Parallel & Distributed Processing Symposium Workshops.

[20] Andrea Lodi, et al. MIPLIB 2010, 2011, Math. Program. Comput.

[21] Matteo Fischetti, et al. Self-splitting of Workload in Parallel Computation, 2014, CPAIOR.

[22] Ted K. Ralphs, et al. Integer and Combinatorial Optimization, 2013.

[23] Thorsten Koch, et al. Solving Open MIP Instances with ParaSCIP on Supercomputers Using up to 80,000 Cores, 2016, IEEE International Parallel and Distributed Processing Symposium (IPDPS).

[24] Qun Chen, et al. FATCOP 2.0: Advanced Features in an Opportunistic Mixed Integer Programming Solver, 2001, Ann. Oper. Res.

[26] Richard Laundy, et al. Solving Hard Mixed-Integer Programming Problems with Xpress-MP: A MIPLIB 2003 Case Study, 2009, INFORMS J. Comput.

[27] Louis Wehenkel, et al. Machine Learning to Balance the Load in Parallel Branch-and-Bound, 2015.

[28] Jonathan Eckstein, et al. Parallel Branch-and-Bound Algorithms for General Mixed Integer Programming on the CM-5, 1994, SIAM J. Optim.

[29] Matthew J. Saltzman, et al. Computational Experience with a Software Framework for Parallel Integer Programming, 2009, INFORMS J. Comput.

[30] Cynthia A. Phillips, et al. PICO: An Object-Oriented Framework for Parallel Branch and Bound, 2001.

[31] Marek Olszewski, et al. Kendo: efficient deterministic multithreading in software, 2009, ASPLOS.

[32] G. Nemhauser, et al. Integer Programming, 2020.

[33] Timo Berthold, et al. A First Implementation of ParaXpress: Combining Internal and External Parallelization to Solve MIPs on Supercomputers, 2016, ICMS.

[34] G. Mitra, et al. A Distributed Processing Algorithm for Solving Integer Programs Using a Cluster of Workstations, 1997, Parallel Comput.