Swarm Intelligence has attracted great attention in the past years. In particular, with the advent of small, low-cost mobile robots due to the continuing miniaturization of hardware, it is now possible to approach complex, intricate problems such as rescuing people from dangerous situations, exploring earthquake areas with smart dust, and special logistics in industrial applications, to name a few. Swarms of robots or, more generally, of individuals that have to solve a task cooperatively, autonomously, and in a decentralized fashion must organize themselves without intervention from the outside. Through communication, a swarm of individuals can achieve more than a mere group of singletons, showing new, emergent capabilities, like two robots that together can take stereo pictures, for example. It is a challenging and important scientific task to endow swarms of individuals with the so-called self-∗ properties, like self-organization, self-reconfiguration, self-adaptation, self-assembly, self-healing, self-optimization, and so on, and to identify and measure the emergent properties.
When a swarm of individuals solves problems such as those mentioned above, its actual successful behavior seems at first glance to be unpredictable, in particular because the behavior is determined by its work history and the self-∗ properties. Hence, researchers say that swarm intelligence is at work. Here, the term intelligence includes the well-fitting meaning of processing information and retaining the results as knowledge to be applied towards adaptive behavior.
Furthermore, Swarm Intelligence is not only deliberately used in swarms of technical devices. Swarm intelligence can also be applied in the design of (optimization) algorithms by taking inspiration from swarms of animals. In many real-world optimization problems, the actual objective function is not known in advance. For example, if many pairs of 2D medical images, one from CT and one from MRI, have to be registered, i. e., aligned so that their structures overlay in a meaningful way, the images have to be transformed such that a similarity metric is optimized. The actual objective function depends on the images and cannot be efficiently optimized by specially designed algorithms. So this is a typical case where so-called metaheuristics are applied, i. e., methods that receive the objective function f as a black box and search for an input that optimizes f.
Here, a swarm of (virtual) individuals may scan the search space for an input x to f that is sufficiently good. Two conflicting goals may be pursued during such a search. On the one hand, the search space should be well explored; on the other hand, there should also be exploitation, which means that, under the assumption that in the neighborhood of a good solution candidate there might be even better ones, the swarm searches close to already found good solution candidates. Here, too, we can identify the swarm properties described above.
A large zoo of bio-inspired swarm optimization methods has been developed: Particle Swarm Optimization (PSO), inspired by flocks of birds and schools of fish, Ant Colony Optimization (ACO), Artificial Bee Colonies (ABC), Cockroach-inspired Swarm Evolution (CSE), Cat Swarm Optimization (CSO), and so on. These methods and many more can easily be found in the literature. Unfortunately, only few theoretical results are known on these methods with respect to convergence speed and the quality of the returned solution.
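To make the black-box setting and the exploration–exploitation trade-off concrete, the following is a minimal sketch of a textbook PSO in global-best form; all parameter values (swarm size, inertia weight w, attraction coefficients c1 and c2, the search box [-5, 5]) are illustrative choices, not taken from any of the surveyed articles. The inertia term keeps particles moving through unexplored regions, while the two attraction terms pull them towards already found good solution candidates.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize the black-box function f over [-5, 5]^dim with a basic PSO."""
    rnd = random.Random(42)
    x = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]            # each particle's best position so far
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]     # best position found by the whole swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # inertia (exploration) plus attraction towards the personal
                # and global best positions (exploitation)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])                   # f is only ever queried as a black box
            if fx < pval[i]:
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:
                    gbest, gval = x[i][:], fx
    return gbest, gval

# Example run on the sphere function, whose minimum value 0 lies at the origin.
best, val = pso(lambda p: sum(t * t for t in p), dim=3)
```

Note that the sketch never inspects the structure of f; it only evaluates it, which is exactly the metaheuristic setting described above.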
This special issue of it – Information Technology on Swarm Intelligence contains three articles.
In the first article, “Swarm Robotics: Robustness, Scalability, and Self-X Features in Industrial Applications,” the authors present an introduction to swarms consisting of robots. Targeting future industrial applications, they discuss in detail the above-mentioned self-∗ properties.
As mentioned above, only few theoretical results are known in the area of analyzing metaheuristics. Two very recent groups of theoretical findings are surveyed in the next two articles.
The second article of this special issue, “Theory of Particle Swarm Optimization: A Survey of the Power of the Swarm’s Potential,” gives an introduction to the state-of-the-art tool for theoretically analyzing the PSO method, namely the measurable potential of a swarm. With this potential, the unwanted phenomenon of premature convergence/stagnation of a swarm can be explained, and it can be proved that PSO converges to a local optimum almost surely (in the mathematical sense). The potential can also be used to decide when to stop the swarm’s movement.
As PSO is a metaheuristic that was originally developed for optimizing continuous functions, there has also been some effort to apply PSO to discrete optimization problems. The third article, “Runtime Analysis of Discrete Particle Swarm Optimization Algorithms: A Survey,” reports on very recent results on Discrete PSO. In particular, the authors explain how they proved actual running times of Discrete PSO when solving the single-source shortest-path problem and other discrete optimization problems. Their goal is to show that the metaheuristic PSO can come close to specially designed algorithms with respect to the running time.
I would like to thank the authors of the articles, who prepared and revised their contributions to this special issue, and the anonymous reviewers for their helpful and constructive comments. I would also like to thank Stefan Conrad and the Host Editor of this issue, Richard Lenz, for their support and almost infinite patience.