Stress-Testing Memcomputing on Hard Combinatorial Optimization Problems

Memcomputing is a novel computing paradigm that employs time non-local dynamical systems to compute with and in memory. The digital version of these machines (digital memcomputing machines, or DMMs) is scalable and particularly well suited to solving combinatorial optimization problems. One possible realization of DMMs is by means of standard electronic circuits, with and without memory. Since these circuit elements are non-quantum, they can be described by ordinary differential equations (ODEs); the circuit representation of DMMs can therefore also be simulated efficiently on traditional computers. We have indeed previously shown that such simulations require only time and memory resources that scale linearly with the problem size when applied to finding a good approximation to the optimum of hard instances of the maximum-satisfiability (Max-SAT) problem, whereas state-of-the-art algorithms require exponential resources on the same instances. In that work, however, we did not push the simulations to the limit of the processor used. Since linear scalability at smaller problem sizes cannot guarantee linear scalability at much larger sizes, we have extended these results in a stress test up to $64\times 10^{6}$ variables (corresponding to about 1 billion literals), namely the largest case that we could fit on a single core of an Intel Xeon E5-2860 with 128 GB of dynamic random-access memory (DRAM). For this test, we have employed a commercial simulator, Falcon of MemComputing, Inc. We find that the simulations of DMMs still scale linearly in both time and memory up to these very large problem sizes, versus the exponential requirements of the state-of-the-art solvers. These results further reinforce the advantages of the physics-based memcomputing approach over traditional ones.
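To make the "simulate the circuit ODEs as a solver" idea concrete, the following is a minimal toy sketch: a small SAT formula is encoded into a smooth energy over continuous "spin" variables, and the resulting gradient-flow ODE is integrated with forward Euler until the trajectory settles on a satisfying corner of the hypercube. This is an illustrative assumption on our part, not the actual DMM equations or the Falcon simulator: real DMMs use self-organizing logic gates with additional memory variables, and the instance, energy function, and step size below are all invented for the example.

```python
# Toy sketch (illustrative only): integrating an ODE whose fixed points
# encode satisfying assignments of a tiny CNF formula. NOT the DMM/Falcon
# equations from the paper; it only shows "ODE integration as a solver".
import numpy as np

# Hypothetical CNF instance: each literal is (variable_index, sign).
cnf = [[(0, +1), (1, +1)],   # x0 OR x1
       [(0, -1), (2, +1)],   # NOT x0 OR x2
       [(1, -1), (2, -1)]]   # NOT x1 OR NOT x2

def energy_grad(s):
    # Energy E(s) = sum over clauses c of prod over literals (1 - sign*s_j)/2,
    # which is 0 exactly when every clause has a fully satisfied literal.
    # Here we return its gradient with respect to the continuous spins s.
    g = np.zeros_like(s)
    for clause in cnf:
        for j, sign in clause:
            partial = -sign / 2.0
            for k, sk in clause:
                if k != j:
                    partial *= (1.0 - sk * s[k]) / 2.0
            g[j] += partial
    return g

rng = np.random.default_rng(0)
s = rng.uniform(-0.5, 0.5, size=3)      # continuous spins in [-1, 1]
dt = 0.1
for _ in range(2000):                   # forward-Euler integration
    s = np.clip(s - dt * energy_grad(s), -1.0, 1.0)

assignment = s > 0                      # threshold back to Boolean values
sat = all(any((assignment[j] if sign > 0 else not assignment[j])
              for j, sign in c) for c in cnf)
```

Because the energy is multilinear in the spins, the flow pushes each coordinate toward a boundary, and for this tiny instance the clipped dynamics terminate at a satisfying corner; a plain gradient flow like this would not, of course, exhibit the scaling properties of DMMs on hard instances.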
