Abstract
This paper presents a survey of different use cases of potential measures in particle swarm optimization (PSO). First, a potential is analyzed to prove that PSO may converge to non-optimal points. Second, a minor modification of PSO, based on a simple potential measure, prevents such convergence to non-optimal points. Finally, analyzing this potential measure yields a reliable stopping criterion for the modified PSO.
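To illustrate the idea of a potential-based stopping criterion, the following Python sketch runs a basic constriction-coefficient PSO loop and halts once a simple potential-style quantity falls below a threshold. This is only a sketch under assumptions made here for illustration: the measure used (sum of absolute velocities plus distances to the global attractor), the threshold, the objective function and all parameter names are chosen for this example and are not the exact formulation from the surveyed papers.

```python
import numpy as np

def sphere(x):
    """Example objective: sphere function (minimization)."""
    return float(np.sum(x ** 2))

def pso_with_potential_stop(f, dim=2, swarm_size=10, chi=0.72984,
                            c1=1.496172, c2=1.496172, max_iter=10_000,
                            potential_threshold=1e-8, rng=None):
    """Basic PSO loop with an illustrative potential-based stopping criterion.

    The 'potential' used here is a proxy: the sum over all particles and
    dimensions of |velocity| + |global attractor - position|.  A vanishing
    value signals that the swarm has (almost) collapsed and stopped moving,
    which is the intuition behind using a potential measure to decide when
    to stop.
    """
    rng = np.random.default_rng(rng)
    x = rng.uniform(-5.0, 5.0, size=(swarm_size, dim))   # positions
    v = rng.uniform(-1.0, 1.0, size=(swarm_size, dim))   # velocities
    pbest = x.copy()                                      # local attractors
    pbest_val = np.array([f(p) for p in pbest])
    g = pbest[np.argmin(pbest_val)].copy()                # global attractor

    for t in range(max_iter):
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v

        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()

        # Potential-style measure: small values indicate the swarm has
        # nearly come to rest at the global attractor.
        potential = np.sum(np.abs(v)) + np.sum(np.abs(g - x))
        if potential < potential_threshold:
            return g, f(g), t + 1  # stop: swarm potential has vanished

    return g, f(g), max_iter

if __name__ == "__main__":
    best, best_val, iters = pso_with_potential_stop(sphere, rng=42)
    print(f"stopped after {iters} iterations, f(best) = {best_val:.3e}")
```

Note that in the surveyed work the potential measure is additionally used inside the algorithm itself, so that the swarm cannot come to rest at non-optimal points; the sketch above only uses a potential-style quantity as a stopping signal.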
About the authors

Bernd Bassimir is a doctoral student at the University of Erlangen-Nuremberg, where he received his Master of Science in Computer Science. His research interests are scheduling problems, with an emphasis on robust optimization under uncertainty, and a deeper understanding of the exploration and exploitation properties of Particle Swarm Optimization (PSO).

Alexander Raß is a doctoral student at the University of Erlangen-Nuremberg, where he received his Master of Science in Mathematics. His research interests are the runtime analysis of algorithms on discrete domains and the convergence analysis of algorithms on continuous domains, in both cases with a focus on Particle Swarm Optimization (PSO). Additionally, he established an open-source project for PSO with very high and adaptive precision.

Manuel Schmitt received his diploma degree in Mathematics from the University of Erlangen-Nuremberg, Germany, in 2011 and his doctorate degree from the same university in 2015. His current affiliation is the Julianum high school in Helmstedt, Germany. His research interests are the satisfiability problem and the analysis of meta-heuristics for black-box optimization problems, particularly the analysis of Particle Swarm Optimization (PSO).
References
1. B. Bassimir, M. Schmitt, and R. Wanka. How much forcing is necessary to let the results of particle swarms converge? In Proc. Int. Conf. on Swarm Intelligence Based Optimization (ICSIBO), pages 98–105, 2014. doi: 10.1007/978-3-319-12970-9_11.
2. B. Bassimir, M. Schmitt, and R. Wanka. Self-adaptive potential-based stopping criteria for particle swarm optimization. arXiv, abs/1906.08867, 2019.
3. M. Clerc and J. Kennedy. The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6:58–73, 2002. doi: 10.1109/4235.985692.
4. R. C. Eberhart and J. Kennedy. A new optimizer using particle swarm theory. In Proc. 6th International Symposium on Micro Machine and Human Science, pages 39–43, 1995. doi: 10.1109/MHS.1995.494215.
5. M. Jiang, Y. P. Luo, and S. Y. Yang. Particle swarm optimization – stochastic trajectory analysis and parameter selection. In F. T. S. Chan and M. K. Tiwari, editors, Swarm Intelligence – Focus on Ant and Particle Swarm Optimization, pages 179–198, 2007. Corrected version of [6]. doi: 10.5772/5104.
6. M. Jiang, Y. P. Luo, and S. Y. Yang. Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Information Processing Letters, 102:8–16, 2007. Corrected by [5]. doi: 10.1016/j.ipl.2006.10.005.
7. J. Kennedy and R. C. Eberhart. Particle swarm optimization. In Proc. IEEE International Conference on Neural Networks, volume 4, pages 1942–1948, 1995. doi: 10.1109/ICNN.1995.488968.
8. P. K. Lehre and C. Witt. Finite first hitting time versus stochastic convergence in particle swarm optimisation. arXiv, abs/1105.5540, 2011.
9. A. Raß. High precision particle swarm optimization (HiPPSO). https://github.com/alexander-rass/HiPPSO/, 2018.
10. A. Raß, M. Schmitt, and R. Wanka. Explanation of stagnation at points that are not local optima in particle swarm optimization by potential analysis. arXiv, abs/1504.08241, 2015.
11. A. Raß, M. Schmitt, and R. Wanka. Explanation of stagnation at points that are not local optima in particle swarm optimization by potential analysis. In Companion of Proc. 17th Genetic and Evolutionary Computation Conference (GECCO), pages 1463–1464, ACM, New York, NY, 2015. doi: 10.1145/2739482.2764654.
12. C. C. Ribeiro, I. Rosseti, and R. C. Souza. Effective probabilistic stopping rules for randomized metaheuristics: GRASP implementations. In Proc. 5th Int. Conf. on Learning and Intelligent Optimization (LION), pages 146–160, 2011. doi: 10.1007/978-3-642-25566-3_11.
13. M. Schmitt and R. Wanka. Particle swarm optimization almost surely finds local optima. In Proc. 15th Genetic and Evolutionary Computation Conference (GECCO), pages 1629–1636, 2013. doi: 10.1145/2463372.2463563.
14. M. Schmitt and R. Wanka. Particle swarm optimization almost surely finds local optima. Theoretical Computer Science, Part A, 561:57–72, 2015.
15. P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y. P. Chen, A. Auger, and S. Tiwari. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical report, Nanyang Technological University, Singapore, 2005.
16. F. van den Bergh and A. P. Engelbrecht. A new locally convergent particle swarm optimiser. In Proc. IEEE Int. Conf. on Systems, Man and Cybernetics (SMC), volume 3, pages 94–99, 2002.
17. C. Witt. Why standard particle swarm optimisers elude a theoretical runtime analysis. In Proc. 10th ACM SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA), pages 13–20, 2009. doi: 10.1145/1527125.1527128.
18. K. Zielinski and R. Laur. Stopping criteria for a constrained single-objective particle swarm optimization algorithm. Informatica, 31:51–59, 2007.