Stability bounds and almost sure convergence of improved particle swarm optimization methods

Particle swarm optimization (PSO) is a nature-inspired metaheuristic algorithm. Its formulation is simple and requires no derivative computations. PSO and its many variants have been applied to a wide range of optimization problems across several disciplines. Many attempts have been made to study the convergence properties of PSO, but a rigorous and complete proof of its almost sure convergence to the global optimum is still lacking. We propose two modified versions of PSO and prove their almost sure convergence to the global optimum. We conduct simulation studies to gain further insight into their properties and to evaluate their performance relative to standard PSO.
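For context, the standard global-best PSO update that the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's modified variants (which are not specified here); the function name `pso`, the inertia weight `w`, and the acceleration coefficients `c1`, `c2` (set to commonly used constriction-style values) are assumptions for the sketch.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.729, c1=1.494, c2=1.494, seed=0):
    """Minimize f over a box using standard global-best PSO (a sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)      # personal best values
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # move, stay inside the box
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val               # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()      # update global best
    return g, f(g)

# usage: minimize the sphere function in 5 dimensions
best_x, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```

Note that the update uses only function evaluations, no derivatives, which is the property the abstract highlights.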
