KGSA: A Gravitational Search Algorithm for Multimodal Optimization based on K-Means Niching Technique and a Novel Elitism Strategy

Shahram Golzari (corresponding author)
  Department of Electrical and Computer Engineering, University of Hormozgan, Bandar Abbas, Iran
Mohammad Nourmohammadi Zardehsavar / Amin Mousavi / Mahmoud Reza Saybani
  Department of Computer Networks, Markaz-e Elmi Karbordi Bandar Abbas 1, University of Applied Science and Technology, 79199-33153, Bandar Abbas, Iran
Abdullah Khalili / Shahaboddin Shamshirband (corresponding author)
  Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City, Vietnam
  Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam
Published Online: 2018-12-31 | DOI: https://doi.org/10.1515/math-2018-0132

Abstract

Gravitational Search Algorithm (GSA) is a metaheuristic for solving unimodal problems. In this paper, a K-means based GSA (KGSA) for multimodal optimization is proposed. This algorithm incorporates K-means and a new elitism strategy called “loop in loop” into the GSA. First, the members of the initial population are clustered by K-means. Afterwards, a new population is created and divided into different niches (or clusters) to expand the search space. The “loop in loop” technique guides the members of each niche towards the optimum of their cluster; lighter members move towards the optimum of each cluster faster than heavier members. For evaluation, KGSA is benchmarked on well-known functions and compared with some state-of-the-art algorithms. Experiments show that KGSA provides better results than the other algorithms in finding the local and global optima of constrained and unconstrained multimodal functions.

Keywords: Gravitational Search Algorithm (GSA); multimodal optimization; K-means; niching methods

1 Introduction

In addition to the need for finding several optima in many applications, solving multimodal problems can be useful for at least two reasons: first, it increases the chance of finding the global optimum, and second, it helps the researcher become more familiar with the nature of the problem [1]. Population-based (or meta-heuristic) algorithms have been used to solve optimization problems. Some of these algorithms are: Genetic Algorithm (GA) [2, 3], Simulated Annealing (SA) [4], Artificial Immune Systems (AIS) [5], Ant Colony Optimization (ACO) [6], Particle Swarm Optimization (PSO) [7], and Gravitational Search Algorithm (GSA) [8, 9]. Generally speaking, these algorithms are inspired by nature and are effective in solving unimodal optimization problems. However, they have not been very successful in solving multimodal problems. To resolve this, two solutions have been proposed: (1) niching techniques, which converge to more than one solution by dividing the main population into non-overlapping areas, and (2) elitism strategies, which accelerate convergence by selecting the best individuals from the current population and its offspring.

In this study, the Gravitational Search Algorithm (GSA) is boosted with K-means niching and a new elitism strategy called “loop in loop” to make an efficient algorithm (called KGSA) for solving multimodal problems. GSA was selected since it is less dependent on parameters and can find existing optima in fewer iterations without becoming trapped in local optima. In addition, the K-means clustering technique was chosen for its simplicity, effectiveness, and low time complexity in dividing the main population into non-overlapping subpopulations. Results show that by incorporating K-means and “loop in loop” into the GSA, the proposed algorithm improves both exploration and exploitation.

This paper is structured as follows: in section 2, some recent works on solving multimodal problems are reviewed. In section 3, GSA, niching concepts, and the K-means clustering technique are described. Section 4 describes how the proposed algorithm is designed and how “loop in loop” is used. In section 5, after introducing the constrained and unconstrained benchmark functions, the evaluation criteria and the parameters required for the proposed algorithm are presented, the results of running the proposed algorithm on the benchmark functions are analyzed, and the sensitivity of the algorithm to the parameter Tl is measured on some functions. Finally, in section 6, the strengths and weaknesses of the proposed algorithm are analyzed and some future works are suggested.

2 Literature review

Solving multimodal problems has always been one of the important and interesting issues for computer science researchers. The authors of [10] presented the NichePSO algorithm to solve multimodal problems and showed its efficiency by solving some multimodal functions. In this algorithm, Guaranteed Convergence Particle Swarm Optimization (GCPSO) and Faure sequences [11] were used to optimize the sub-swarms and to initialize the population, respectively. In addition, two parameters δ and μ were defined. If an individual’s variance was less than δ, in other words, if the individual did not change over several generations, it might be an optimum member; the individual and its closest neighbour then build a sub-swarm. The parameter μ, on the other hand, was used to merge sub-swarms. Results indicated that NichePSO depends heavily on the parameter μ to explore the optimal solutions, which is a major weakness.

The authors of [12] presented the Clustering-Based Niching (CBN) method to find global and local optima. The main idea of this method for exploring optima and keeping the population diverse was to use sub-populations instead of one population. Species were formed from sub-populations, and sub-populations were separated by a density-based clustering algorithm, which is appropriate for populations of different sizes and for problems in which the number of clusters is not predetermined. For two members to be joined during the clustering process of CBN, their distance must be less than the parameter σdist. Results showed that this algorithm depends heavily on σdist, which is a major drawback.

The authors of [13] presented a method combining PSO with a cleansing technique. In this method, the population was divided into different species based on the similarity of its members. Each species was then formed around a dominant individual (the ‘species seed’). In each phase, particles were selected from the whole population and the species were formed adaptively based on feedback from the fitness landscape. The method was named Species-based Particle Swarm Optimization (SPSO). Although SPSO proved effective in solving multimodal problems with small dimensions, its dependency on the species radius is among its weaknesses.

The authors of [14] presented an algorithm for solving multimodal optimization problems named Multi-Grouped Particle Swarm Optimization (MGPSO), in which, if the number of groups is N, PSO can search N peaks. The efficiency of this method was shown in [14]. Its weaknesses are: (1) the user must determine the number of groups, that is, the number of optimal solutions, when initializing the algorithm, even though the function may be unknown and no information about the number of optimal points may exist; and (2) the user must determine the number of individuals in each group and select an appropriate initial value for the radius of each gbest.

The authors of [15] used a new algorithm called NGSA for solving multimodal problems. The main idea was to divide the initial swarm into several sub-swarms. NGSA used the following three strategies: (i) an elitism strategy, (ii) a K-NN strategy, and (iii) amendment of the active gravitational mass formulation [15]. This algorithm was applied to two important groups of constrained and unconstrained multimodal benchmark functions and obtained good results, but it suffers from a high dependency on the two parameters Ki and Kf.

The authors of [16] proposed Multimodal Cuckoo Search (MCS), a modified version of Cuckoo Search (CS) with multimodal capacities provided by the following three mechanisms: (1) a memory mechanism that efficiently registers potential local optima based on their fitness value and their distance to other potential solutions, (2) a modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) a depuration procedure that cyclically eliminates duplicated memory elements. Experiments indicated that MCS provides competitive results compared to other algorithms for multimodal optimization.

The author of [17] modified the original PSO by dividing the original population into several subpopulations based on the order of particles. The best particle in each subpopulation was then employed in the velocity updating formula instead of the global best particle of PSO. Evaluations showed that after modifying the velocity updating formula, the convergence behaviour of particles, in terms of the number of iterations and the local and global solutions found, was improved.

The authors of [18] combined the exploration mechanism of the Gravitational Search Algorithm with the exploitation mechanism of Cuckoo Search and called their method Cuckoo Search-Gravitational Search Algorithm (CS-GSA). Evaluations on standard test functions showed that CS-GSA converges with fewer fitness evaluations than both Cuckoo Search and GSA.

The authors of [19] proposed a multimodal optimization method based on the firefly algorithm. In their method, the optimal points are detected by evolving each sub-population separately. A stability criterion is used to determine the stability of sub-populations; stable sub-populations contain optimal points and are therefore stored in an archive. After several iterations, all the optima are included in the archive. The algorithm also incorporates a simulated annealing local optimization step to enhance search power, accuracy, and speed. Experiments show that the proposed algorithm can successfully find the optima in multimodal optimization problems.

The authors of [20] proposed a niching method for the Chaos Optimization Algorithm (COA) called NCOA. Their method utilizes a number of techniques, including simultaneously contracted multiple search scopes, deterministic crowding, and clearing for niching. Experiments demonstrated that by using niching, NCOA can compete with state-of-the-art multimodal optimization algorithms.

The authors of [21] presented a novel evolutionary algorithm called Negatively Correlated Search (NCS), which runs multiple individual searches in parallel and models the behaviour of each search as a probability distribution. Experiments showed that NCS is competitive with state-of-the-art multimodal optimization algorithms, achieving the best overall performance on 20 non-convex benchmark functions.

The author of [22] presented a modified PSO which first randomly divides the original population into two groups, one focusing on the maximization of the multimodal function and the other on its minimization. Each group is then divided into subgroups for finding optimum points simultaneously. Importantly, the subgroups are independent and each one seeks one optimum individually. Similar to [17], the velocity updating formula is modified by substituting the best particle of each subgroup for the global best. Evaluations on different kinds of multimodal optimization functions and one complex engineering problem demonstrated the applicability of the proposed method.

The authors of [1] incorporated a novel niching method into GSA (named NNGSA) by using a Nearest Neighbour (NN) mechanism to form species inside the population. They also employed the hill-valley algorithm, without pairwise comparisons between all pairs of solutions, to detect niches inside the population. Experiments showed the effectiveness of NNGSA compared to well-known niching methods.

3 Basic concepts

3.1 Gravitational search algorithm

GSA was presented in [8] based on Newton’s law of gravitation. Agents in this algorithm are similar to particles in the universe. The heavier the mass of an agent, the more efficient that agent is. This means that agents with heavier masses attract more strongly and move more slowly (Figure 1). The location of agent i in GSA is given by Equation (1). Considering a system of N agents, the whole system can be formulated as Equation (2).

$X_i = (x_i^1, x_i^2, \ldots, x_i^d, \ldots, x_i^n)$  (1)

$X = (X_1, X_2, \ldots, X_i, \ldots, X_N)$  (2)

Figure 1: General principle of GSA [8]

In the above equations, $x_i^d$ is the location of agent i in dimension d, n is the dimension of the search space, and N is the number of individuals (or agents). At a given time t, the force applied on agent i by agent j is calculated using Equation (3).

$F_{ij}^d(t) = G(t)\,\frac{M_{pi}(t) \times M_{aj}(t)}{R_{ij}(t) + \epsilon}\,\big(x_j^d(t) - x_i^d(t)\big)$  (3)

where Maj is the active gravitational mass of agent j, Mpi is the passive gravitational mass of agent i, G(t) is the gravitational constant at time t, ϵ is a small constant, and Rij is the Euclidean distance between the two agents, determined by Equation (4).

$R_{ij}(t) = \lVert X_i(t), X_j(t) \rVert_2$  (4)

Assuming that Kbest is the set of the K agents with the best fitness values and thus the biggest masses, the total force applied to agent i in dimension d by the agents in Kbest is computed by Equation (5).

$F_i^d(t) = \sum_{j \in Kbest,\, j \neq i} rand_j\, F_{ij}^d(t)$  (5)

where randj is a random number in [0, 1]. It should be noted that Kbest changes with time: its initial value is K0, and it decreases as time passes. The acceleration of agent i in dimension d at time t is calculated by Equation (6).

$a_i^d(t) = \frac{F_i^d(t)}{M_{ii}(t)}$  (6)

where Mii is the inertia mass of agent i. In addition, the velocity and position of agent i in dimension d at time t + 1 are calculated by Equations (7) and (8), respectively. In Equation (7), randi is a random number in [0, 1].

$v_i^d(t+1) = rand_i \times v_i^d(t) + a_i^d(t)$  (7)

$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)$  (8)

The gravitational constant G is a function of time: it starts with the initial value G0 and decreases over time to control the accuracy of the search. Its value is calculated by Equations (9) and (10).

$G(t) = G(G_0, t)$  (9)

$G(t) = G_0\, e^{-\alpha t / T}$  (10)

where α and G0 are fixed values and T is the total number of iterations. In addition, the inertia and gravitational masses are updated using Equations (11)-(13):

$M_{ai} = M_{pi} = M_{ii} = M_i, \quad i = 1, 2, \ldots, N$  (11)

$m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)}$  (12)

$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$  (13)

where fiti(t) denotes the fitness value of agent i at time t, and worst(t) and best(t) are calculated using Equations (14) and (15).

$best(t) = \max_{j \in \{1, \ldots, N\}} fit_j(t)$  (14)

$worst(t) = \min_{j \in \{1, \ldots, N\}} fit_j(t)$  (15)
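To make these update rules concrete, the following sketch performs one GSA iteration for a maximization problem. It is a minimal NumPy rendering of Equations (3)-(15); the array shapes, the small-constant guards, and the linearly shrinking Kbest schedule are our assumptions, not prescriptions from the paper.

```python
import numpy as np

def gsa_step(X, V, fitness, t, T, G0=100.0, alpha=8.0, K=None, eps=1e-10):
    """One GSA iteration for a maximization problem.

    X, V: (N, n) arrays of positions and velocities.
    fitness: callable mapping an (n,) position to a scalar (higher is better).
    """
    N, _ = X.shape
    fit = np.array([fitness(x) for x in X])

    # Equations (12)-(15): normalize fitness values into masses.
    best, worst = fit.max(), fit.min()
    m = (fit - worst) / (best - worst + eps)
    M = m / (m.sum() + eps)

    # Equation (10): the gravitational constant decays over time.
    G = G0 * np.exp(-alpha * t / T)

    # Kbest: only the K heaviest agents exert force; K shrinks with time.
    if K is None:
        K = max(1, int(N * (1 - t / T)))
    kbest = np.argsort(M)[-K:]

    # Equations (3)-(5): randomly weighted total force on each agent.
    F = np.zeros_like(X)
    for i in range(N):
        for j in kbest:
            if j == i:
                continue
            R = np.linalg.norm(X[i] - X[j])          # Equation (4)
            F[i] += np.random.rand() * G * M[i] * M[j] / (R + eps) * (X[j] - X[i])

    a = F / (M[:, None] + eps)                        # Equation (6)
    V = np.random.rand(N, 1) * V + a                  # Equation (7)
    return X + V, V                                   # Equation (8)
```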

3.2 Niching

As mentioned previously, niching is the concept of dividing the search space into different areas, or niches, in such a way that these areas do not overlap. Evolutionary algorithms with niching techniques search for optimal solutions in each separate area in order to efficiently explore the search space and find the global optimal solutions. Some of the well-known niching techniques are: fitness sharing [23], clearing [24], crowding [25], deterministic crowding [26, 27, 28], and probabilistic crowding [29].

3.3 K-means algorithm

K-means is employed as the niching technique in KGSA. The most important parameter of K-means is the number of clusters, which is determined manually by the user. Clusters are represented by their centers, which are randomly selected at the beginning of K-means. Then, K-means works as follows: each point (or agent) is assigned to the closest center, and in this way the members of each cluster are determined. The mean of the members of each cluster is calculated as the new center of that cluster (Equation (16)). This process continues until the maximum number of iterations is reached or the members of the clusters do not change.

$c_k = \frac{1}{|S_k|} \sum_{X_i \in S_k} X_i$  (16)

In the above equation, Xi is an agent i belonging to cluster Sk, ck is the center of Sk, and |Sk| is the number of members (agents) in Sk. Pseudocode of the K-means algorithm is shown in Figure 2. Simplicity, flexibility, and ease of understanding are among the advantages of this algorithm, but having to determine the number of clusters at the start is a weakness. In addition, since the initial centers are randomly selected, results differ between runs.

Figure 2: Pseudocode of K-means algorithm for clustering population members
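As a concrete companion to Figure 2, here is a minimal NumPy sketch of the clustering step; the random center initialization and the empty-cluster guard are our simplifications.

```python
import numpy as np

def kmeans(X, K, max_iter=100, rng=None):
    """Cluster the (N, n) population X into K niches; returns (labels, centers)."""
    rng = rng or np.random.default_rng()
    centers = X[rng.choice(len(X), size=K, replace=False)]  # random initial centers
    for _ in range(max_iter):
        # Assignment step: each agent joins its closest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step, Equation (16): each center becomes the mean of its members.
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):   # stop when centers no longer move
            break
        centers = new_centers
    return labels, centers
```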

4 Proposed algorithm

GSA, K-means, and a new elitism strategy called “loop in loop” are used to design the proposed algorithm, which is called KGSA (K-means Gravitational Search Algorithm) owing to its use of K-means and GSA. In this algorithm, the members of the initial population are first clustered by K-means. Afterwards, the population is created and divided into different niches (clusters), and the GSA reinforced with the “loop in loop” technique guides the members of each niche towards the optimum of their cluster. More specifically, members of each cluster apply force to each other so that lighter members move towards the optimum of their cluster with higher velocity and heavier ones move more slowly. The principle of KGSA is shown in Figure 3. The different parts of KGSA are described separately below.

Figure 3: General principle of KGSA

4.1 Population initialization

Before initializing the population, the structure of individuals must be specified. In this research, a phenotypic structure is used in which each member gets a numerical value in each dimension. To form the initial population, the uniform and partition methods adopted from [15] are used. In the uniform method, members are initialized using Equation (17):

$x_i^d = Low + rand \cdot (High - Low)$  (17)

where $x_i^d$ is the location of agent (member) i in dimension d, Low and High are respectively the lowest and highest possible values in dimension d, and rand is a random number in [0, 1]. Although the uniform method is simple, members initialized with this approach may be placed close to each other, resulting in poor coverage of the search space.

In the partition method, the feasible interval [Low, High] in each dimension d is first divided into N smaller sub-intervals of equal length, where N is the size of the population. Then, for each dimension d, members are assigned to different sub-intervals and, similarly to Equation (17), get a random value within their sub-interval. With this approach, the main problem of the uniform method is solved and members are better distributed in the search space.
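Both initialization schemes are easy to sketch. Equation (17) gives the uniform method directly; the independent per-dimension shuffling in `partition_init` is our reading of the partition method.

```python
import numpy as np

def uniform_init(N, n, low, high, rng=None):
    """Equation (17): every coordinate drawn uniformly from [low, high]."""
    rng = rng or np.random.default_rng()
    return low + rng.random((N, n)) * (high - low)

def partition_init(N, n, low, high, rng=None):
    """Split [low, high] into N equal sub-intervals per dimension and give
    each member one sub-interval per dimension, shuffled independently,
    so the population covers the whole space."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(low, high, N + 1)
    X = np.empty((N, n))
    for d in range(n):
        order = rng.permutation(N)       # which sub-interval each member gets
        width = edges[order + 1] - edges[order]
        X[:, d] = edges[order] + rng.random(N) * width
    return X
```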

4.2 Population reformation

Population reformation is used for constrained functions so that infeasible solutions are avoided. Infeasible solutions may be produced when the initial population or a new population is created. In the first case, the infeasible solutions are replaced by new solutions (members) created repeatedly using the uniform or partition methods; this process continues until feasible solutions are found. In the latter case, the infeasible solutions are replaced by the fittest members of the previous generation [30].
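A sketch of both reformation rules follows; the `is_feasible` predicate and the `sampler` callable are hypothetical placeholders for a problem's constraint check and for the uniform or partition initializer.

```python
def reform_initial(population, is_feasible, sampler):
    """Resample infeasible members of the initial population until feasible."""
    def resample():
        x = sampler()
        while not is_feasible(x):
            x = sampler()
        return x
    return [x if is_feasible(x) else resample() for x in population]

def reform_new(population, previous, fitness, is_feasible):
    """Replace infeasible members of a new population with the fittest
    members of the previous generation [30]."""
    fittest_prev = iter(sorted(previous, key=fitness, reverse=True))
    return [x if is_feasible(x) else next(fittest_prev) for x in population]
```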

4.3 Production of appropriate clusters from population

When initializing the population, inappropriate clusters are those that are empty or have a single member; in the other phases of KGSA, only empty clusters are considered inappropriate. Depending on the type of population initialization (especially the uniform method), some clusters may be inappropriate after executing the K-means algorithm, resulting in lost niches. To resolve this problem, if a cluster is inappropriate during population initialization, the population is rejected, and the initial population is formed and clustered again until only appropriate clusters are created. In the other phases of KGSA, clustering is repeated until none of the clusters is inappropriate.
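The rejection rule can be sketched as follows, reusing the `kmeans` helper above; the `min_size` encoding of "inappropriate" paraphrases this subsection, and `make_population` is a hypothetical initializer.

```python
import numpy as np

def cluster_appropriately(X, K, initializing, make_population, max_tries=1000):
    """Cluster X with K-means, rejecting 'inappropriate' outcomes.

    During initialization, empty and single-member clusters are both
    rejected and the whole population is rebuilt; in later phases only
    empty clusters are rejected and X is simply re-clustered.
    """
    min_size = 2 if initializing else 1
    for _ in range(max_tries):
        labels, centers = kmeans(X, K)
        if np.bincount(labels, minlength=K).min() >= min_size:
            return X, labels, centers
        if initializing:
            X = make_population()   # reject and rebuild the initial population
    raise RuntimeError("no appropriate clustering found")
```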

4.4 Calculation of mass, force, velocity and production of the next generation

Before calculating the mass of each member (agent), its neighbours must be determined. Since the members have already been clustered, the other members within a member’s cluster are considered its neighbours. Then, using Equations (11)-(13), the mass of each member is calculated. After the masses of all members are specified, the total force on agent i in dimension d is determined according to Equation (5); force is applied to a member only by its neighbours. When members apply force to each other, each member moves in some direction with a different velocity, which can be calculated using Equation (7). As a result of this movement, a new population is created. As discussed in the next section, this new population competes with the candidate members of the current population to form the next generation.

4.5 Use of innovative method “loop in loop” and discovery of optima

As described in the previous section, after the current population has been developed for Tl iterations using the GSA (section 4.4), the new population is created. Since this process takes Tl iterations, it is called the first (or inner) loop of the “loop in loop” method. Then, the fittest members of the current population are selected as candidates (section 4.5.1) and compete with the members of the new population (section 4.5.2) to form the initial population of the next generation. This process continues until the maximum number of generations or the maximum number of permitted evaluations is reached; this is the second (or outer) loop of the “loop in loop”.
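The overall control flow can then be sketched like this; `select_candidates` and `compete` are sketched in the next two subsections, and applying `gsa_step` cluster by cluster reflects section 4.4's neighbour-only forces.

```python
import numpy as np

def kgsa(make_population, fitness, K, T, Tl):
    """'Loop in loop' skeleton: T outer generations, Tl inner GSA steps each."""
    X = make_population()
    V = np.zeros_like(X)
    for _generation in range(T):                      # outer loop
        labels, _ = kmeans(X, K)                      # niche the population
        for t in range(Tl):                           # inner loop develops niches
            for k in range(K):                        # forces stay inside a niche
                idx = np.where(labels == k)[0]
                X[idx], V[idx] = gsa_step(X[idx], V[idx], fitness, t, Tl)
        candidates = select_candidates(X, labels, fitness)   # section 4.5.1
        X = compete(candidates, X, fitness)                  # section 4.5.2
    return X
```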

4.5.1 Selecting the candidate members from the current population

After the current population has been developed for Tl iterations using GSA, its members are closer to the optimal solutions. The two closest members of each cluster (niche) are then found; the fitter one is kept and the other is removed. This process continues until the number of selected members equals the number of optimal solutions (niches). These selected members, along with the fittest members of the current population (i.e. the best individual in the whole population and those having at least 80% of the fitness of the best individual), are the candidate members for the next generation. The described process is based on the work of [30] and is illustrated in Figure 4.

Figure 4: Preparing the candidate list at the end of the inner loop of “loop in loop”
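A sketch of this selection rule follows; the repeated closest-pair merge is our reading of Figure 4, and the 80% threshold is from the text and assumes positive fitness values.

```python
import numpy as np

def select_candidates(X, labels, fitness, ratio=0.8):
    """Pick the candidate members of the current population (section 4.5.1)."""
    fit = np.array([fitness(x) for x in X])
    n_niches = labels.max() + 1
    keep = list(range(len(X)))
    # Repeatedly find the two closest co-cluster members and keep the fitter
    # one, until one representative per niche remains.
    while len(keep) > n_niches:
        pair, best_d = None, np.inf
        for a in range(len(keep)):
            for b in range(a + 1, len(keep)):
                i, j = keep[a], keep[b]
                if labels[i] == labels[j]:
                    d = np.linalg.norm(X[i] - X[j])
                    if d < best_d:
                        pair, best_d = (i, j), d
        if pair is None:
            break
        i, j = pair
        keep.remove(i if fit[i] < fit[j] else j)   # drop the less fit member
    # Add the globally fittest members (>= 80% of the best fitness).
    elite = np.flatnonzero(fit >= ratio * fit.max())
    return X[sorted(set(keep) | set(elite.tolist()))]
```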

4.5.2 Forming the next generation by selecting the best members of candidates and the new population

The candidate members of the current population compete with the new population in the following way: for each candidate, the closest member of the new population is found. If the candidate is fitter, it replaces that closest member of the new population; otherwise, it is ignored (Figure 5). After all the candidates have been checked, the initial population for the next generation has been formed, and the next iteration of the inner loop of the “loop in loop” starts. It is worth mentioning that when going from one generation to the next, the niches are also transferred.

Figure 5: Forming the initial population of the next generation by selecting the most fitted members of the candidate list and the new population
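The competition itself is a short nearest-neighbour replacement (our reading of Figure 5):

```python
import numpy as np

def compete(candidates, new_population, fitness):
    """Each candidate challenges the nearest member of the new population;
    the fitter of the two survives (section 4.5.2)."""
    X = new_population.copy()
    for c in candidates:
        nearest = np.linalg.norm(X - c, axis=1).argmin()
        if fitness(c) > fitness(X[nearest]):
            X[nearest] = c             # candidate replaces its closest rival
    return X
```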

4.6 Conditions of ending the “loop in loop” algorithm

As illustrated in Figure 3, the maximum number of generations in the “loop in loop” algorithm is a fixed number T, while the stopping criterion for the K-means algorithm is that the centers of the clusters no longer change.

5 Experimental results

To evaluate KGSA, some standard constrained and unconstrained functions extracted from [15] are used. Results of benchmarking KGSA on these functions are compared with a number of algorithms, including r3PSO [31], r2PSO-lhc [31], r3PSO-lhc [31], SPSO [32], FER-PSO [33], NichePSO [10], deterministic crowding [27], sequential niche [34], NGSA [15], NCOA [20], Firefly [19], and NNGSA [1]. It should be noted that some benchmark functions of this study were not used in NCOA [20], Firefly [19], and NNGSA [1]; the corresponding cells in the results tables are therefore empty.

At the end of section 5, KGSA is statistically compared with the other algorithms using the Friedman test to show whether it is statistically superior or not.

5.1 Constrained and unconstrained benchmark functions

Unlike constrained functions, unconstrained functions impose no constraints on the domain, and their solutions do not follow a specific pattern. Tables 1A and 2A in the Appendix show, respectively, the unconstrained and constrained benchmark functions used in this study. Some of these benchmark functions, along with the positions of their optimal solutions, are presented in Figures 6-9.

Figure 6: Shekel’s Foxholes function

Figure 7: Inverted Vincent function

Figure 8: Inverted Shubert function

Figure 9: Himmelblau’s function

Table 1A: Unconstrained test functions in the experiments (see [15])

Table 2A: Constrained test functions in the experiments (see [15])

5.2 Evaluation Criteria of Algorithms

The KGSA algorithm is run multiple times and the results of these independent runs are averaged. The evaluation criteria are the success rate, the error rate, and the number of fitness evaluations.

5.2.1 Success rate

In each run of the algorithm, if all peaks are found, the run is successful. An agent is considered to have reached a peak when it attains 99% of the peak’s value [15]. In addition, for a fair comparison, in cases where error rates are reported for other algorithms, they are also taken into account for KGSA. The success rate, or Average Discovering Rate (ADR), is calculated by averaging the results of the different runs.
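With known peaks, the per-run success test can be sketched as follows; pairing each peak with its nearest found solution is our assumption.

```python
import numpy as np

def run_successful(found, peaks, fitness, level=0.99):
    """A run succeeds when every known peak is matched, i.e. the found
    solution nearest to the peak attains 99% of the peak's value [15]."""
    found = np.asarray(found)
    for peak_pos, peak_val in peaks:          # peaks: list of (position, value)
        nearest = np.linalg.norm(found - peak_pos, axis=1).argmin()
        if fitness(found[nearest]) < level * peak_val:
            return False
    return True

# ADR over many runs: 100 * (number of successful runs) / (number of runs).
```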

5.2.2 Error rate

Suppose that an algorithm finds a number of optimal solutions in an n-dimensional space. The error rate is the mean Euclidean distance between the positions of the discovered solutions and the positions of the real optima, calculated using Equation (18).

$\zeta = \frac{1}{m} \sum_{i=1}^{m} \left( \sum_{j=1}^{n} (s_{ij} - \varphi_{ij})^2 \right)^{1/2}$  (18)

where m is the number of optimal solutions found by the algorithm and n is the dimension of the problem. In addition, s indicates the position of an optimum found by the algorithm and φ is the position of the corresponding real optimum of the problem. As with the ADR, the error rates of the different runs are averaged.
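Equation (18) transcribes directly into NumPy, assuming the found optima are already paired row-by-row with their true counterparts:

```python
import numpy as np

def error_rate(found, true_optima):
    """Equation (18): mean Euclidean distance between each discovered
    optimum s_i and the corresponding real optimum phi_i."""
    s, phi = np.asarray(found), np.asarray(true_optima)   # both (m, n)
    return np.linalg.norm(s - phi, axis=1).mean()
```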

5.2.3 Number of fitness evaluations

In each generation, when the algorithm finds the niches, the number of times the evaluation function has been called is reported as the number of fitness evaluations. In other words, this is the number of times the fitness of an agent has been calculated since the beginning of the run. As with the success and error rates, this value is averaged over the different runs.

5.3 Parameters and results of other algorithms

The results and parameters of the other algorithms, including the population size N, the number of iterations, and the maximum number of evaluations, can be obtained from [15].

In KGSA, K is the number of clusters and is set to the total number of optimal solutions. In cases where the number of optimal solutions is higher than 25, K is set to the number of optima that must be discovered. G0 is set to 0.1 of the domain value and α is equal to 8. In each loop, the K-means algorithm is executed once. In each generation, the best 70% of the individuals of each cluster apply force to their co-cluster members. Other values from 20% to 95% (with a step size of 5%) were also tested for this parameter; experiments showed that values lower than 70% produce virtual (incorrect) niches, while values higher than 85% lead to losing some actual niches. Thus, the lowest suitable value (70%) was selected since, in addition to providing the desired results, it has a lower computational cost.

5.4 Finding optimal solutions

5.4.1 Finding global optimum in unconstrained functions

To find the global optima, KGSA was executed on functions F1 to F12 with partitioning initialization. The success rate is reported in Table 1, and the numbers of fitness evaluations needed to discover the global optima are reported in Tables 3A and 4A in the Appendix. Table 3A compares KGSA with the well-known swarm-based methods for multimodal optimization, and Table 4A presents the comparison with NNGSA and some other evolutionary approaches. Results are averaged over 50 runs. Table 2 shows the required parameters for the NGSA and KGSA algorithms. Unlike for the other algorithms, when executing NGSA on functions F6 to F9, the initial population size was set to 100 instead of 50, but the maximum number of iterations was decreased so that the maximum total number of evaluations was not exceeded.

Table 1: Comparison of the global maxima (%ADR) found by KGSA, NGSA, r2PSO, r3PSO, r2PSO-lhc, r3PSO-lhc, FER-PSO and SPSO on functions F1 to F12. The results were averaged over fifty independent runs.

Table 2: List of required parameters for discovering global optima in unconstrained functions

Table 3A: Mean number of fitness function evaluations needed to converge for each PSO-based algorithm for the results shown in Table 1. The results were averaged over fifty independent runs.

Table 4A: Average number of fitness function evaluations required to converge for each of the evolutionary algorithms. The results were averaged over fifty independent runs.

As illustrated in Table 1, KGSA performed much better than the well-known swarm-based algorithms in discovering all global optima for all functions. Table 3A in the Appendix shows that the results of Table 1 were obtained with fewer evaluations in most cases. Furthermore, as can be seen in Table 2, the initial population size for KGSA was lower in all cases except for function F12.

5.4.2 Finding local and global optima in unconstrained functions

To find the global and local optima, KGSA is run with different population sizes for at most 120 generations. The results are compared with those of the NGSA algorithm under two initializations: (1) uniform initialization, with comparative results in Table 3, and (2) partitioning initialization, with results in Table 4. Table 3 shows that the proposed algorithm performed much better when the initialization is uniform; with partitioning initialization, KGSA and NGSA were equally successful in finding the global and local optima. To examine a small population, the population size was set to 20 and the error rates of KGSA and NGSA were recorded in Table 5. As can be seen from this table, KGSA had a lower error rate than NGSA for both initialization methods in this setting. The results were averaged over 30 independent runs of both algorithms. Note that Tl was set to 15 for all these experiments.

Table 3: %ADR obtained with KGSA and NGSA using different population sizes and uniform initialization for finding all local and global maxima.

Table 4: %ADR obtained with KGSA and NGSA using different population sizes and partitioning initialization for finding all local and global maxima.

Table 5: The mean error ζ obtained with KGSA and NGSA using N = 20, T = 120.

Considering the results in Tables 3-5, one can see that KGSA outperforms NGSA in finding global and local optima and achieves a lower error rate. In addition, when the population size was reduced, NGSA was sensitive to the initialization type but KGSA was not, since KGSA uses the “loop in loop” method, which results in a larger search space.

To evaluate the algorithms in other settings and on more functions, the number of members was set to 20, the maximum number of iterations was set to 2000, the initialization method was partitioning, and evaluations were performed on functions F1 to F5. The results for KGSA are presented in Tables 6 and 7; the results of the other algorithms can be obtained from [15]. Table 6 shows that the proposed algorithm performed better than the other algorithms in finding global and local optima. The number of evaluations needed to find the optimal solutions is reported in Table 7, which shows that for discovering 100% of the optima, KGSA needed fewer evaluations than the other algorithms. Furthermore, Table 5A in the Appendix shows that KGSA achieved the results in Tables 6 and 7 with a smaller population size and a lower maximum number of iterations than the other algorithms.

Table 6: Comparison of KGSA, NGSA, deterministic crowding, sequential niche and NichePSO algorithms in terms of %ADR for functions F1 to F5. The results were averaged over thirty independent runs.

Table 7: Average number of fitness function evaluations needed to converge for each niching algorithm on functions F1 to F5. The results were averaged over thirty independent runs.

Table 5A: List of required parameters for discovering local and global optima in Tables 6 and 7

The KGSA algorithm was also applied to functions F6 to F10 using partitioning initialization. The results are presented in Tables 8 and 9: the first compares with NGSA and the second with NNGSA. As can be seen from Table 8, KGSA has a higher success rate in all situations, and a lower error rate and number of evaluations in most cases. Moreover, Table 6A in the Appendix shows that the results in Table 8 were achieved with a smaller population size and fewer evaluations than NGSA. All parameters and results for NGSA and KGSA were adopted from [15].

Table 8: Results of KGSA and NGSA in terms of %ADR and the number of fitness evaluations needed to converge for each niching algorithm on functions F6 to F10. The results were averaged over fifty independent runs.

Table 9: Results of KGSA and NNGSA in terms of %ADR and the number of fitness evaluations needed to converge for each niching algorithm on functions F6 to F10. The results were averaged over fifty independent runs.

Table 6A: List of required parameters for discovering local and global optima in Table 8

5.4.3 Finding global optima in constrained functions

KGSA was also evaluated on the constrained functions F13 to F15. Average results of 50 runs using partitioning initialization are shown in Table 10. The table illustrates that KGSA achieved a 100% success rate on all the constrained functions, in addition to having a lower error rate than the other algorithms in most cases. Moreover, the algorithm had the lowest number of evaluations on two of the three functions; on function F15, the number of evaluations for KGSA was greater only than that of NGSA. Table 7A in the Appendix shows the parameter settings used for these experiments and indicates that the results of KGSA were achieved with a smaller population size and fewer iterations.

Table 10: Performance comparison when constrained test functions were used.

Table 7A: List of required parameters for discovering global optima in Table 10

5.5 Sensitivity analysis of parameter Tl in KGSA

In this section, the sensitivity of KGSA to Tl is evaluated. As stated previously, Tl represents the number of repetitions of the inner loop of the “loop in loop” method. This parameter can affect the performance of KGSA since the loops have a major role in finding optima; therefore, setting Tl correctly is very important. If Tl is too small, few generations will be created in a loop, the population will not develop adequately, and candidates with lower fitness will pass to the next generation. On the other hand, if Tl is too large, the number of loops will shrink and, consequently, the search space will be small. It is therefore necessary to select an appropriate value for Tl. To do this, an experiment was conducted to evaluate KGSA on functions F1 to F5 with different values of Tl. In this experiment, the population size was set to 20, the first population was initialized by the partitioning method, and the maximum number of generations was set to 120. The results of 50 independent runs of KGSA in this setting are shown in Table 8A in the Appendix. As can be seen from this table, the success rate of KGSA on F1 to F4 did not change for different values of Tl. For function F5, however, the result is different. When Tl is 10, the success rate is 96%, which indicates that the number of repetitions for building the population is not sufficient. On the other hand, when Tl is 40 or 60, the success rate is 98% or 94%, respectively, which points to an excessive number of repetitions in a loop for building the population, meaning that the search space has been reduced. Overall, it can be concluded that the success rate is not significantly affected by Tl, showing the low sensitivity of KGSA to this parameter.

Table 8A: Success rate of the KGSA for the different values of Tl

5.6 Statistical Tests

In the final test, KGSA is statistically compared with the other methods. For this purpose, the Friedman test was performed on the available metrics for KGSA and the other methods. The Friedman test is based on the chi-square distribution and analyzes whether the results of several methods differ in a statistically significant way [35]. It is a non-parametric statistical test that ranks the algorithms and evaluates whether their results are statistically significantly different: it computes a score for each algorithm on a specific criterion and ranks the algorithms based on these scores. The null hypothesis of the Friedman test states that the methods are not statistically significantly different; if the p-value is less than a predetermined level, the null hypothesis is rejected, which shows that the results of the methods differ significantly. In this paper, the significance level was set to 0.05.
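For reference, such a test can be run with SciPy's friedmanchisquare; this is an illustration only, with made-up placeholder scores, not the authors' data.

```python
from scipy.stats import friedmanchisquare

# One score per benchmark function for each algorithm
# (made-up placeholder numbers, for illustration only).
kgsa    = [100, 100,  98, 100,  96]
ngsa    = [100,  95,  90,  98,  92]
firefly = [100, 100,  97, 100,  95]

stat, p_value = friedmanchisquare(kgsa, ngsa, firefly)
print(f"chi-square = {stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("reject the null hypothesis: the algorithms differ significantly")
```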

The results of the Friedman test are provided in Table 11, where the best method in each column is shown in bold face and the second best is underlined. The table indicates that the KGSA and Firefly [19] algorithms are the most successful. It must be noted that since we did not have access to the code of the other algorithms, and some of the benchmark functions used in this study were different, not all tests could be performed on all algorithms; thus, some cells in Table 11 are empty.

Table 11: Friedman test on KGSA and the other algorithms

6 Conclusions and Future Works

In this paper, KGSA, a novel Gravitational Search Algorithm for multimodal problems, was proposed. KGSA incorporates K-means and a new elitism strategy called “loop in loop” into the conventional Gravitational Search Algorithm (GSA). First, the initial population is clustered by K-means, after which the first population is created by selecting members from the different clusters (niches). This results in a larger search space and thus increases the chance of finding the local and global optima. The “loop in loop” technique is used to guide the members of each niche towards the optimum of their cluster. With these modifications, KGSA is not sensitive to the type of population initialization and does not need a “niche radius” parameter. Evaluations on different benchmark functions showed that KGSA is superior to other GSA-based evolutionary algorithms in finding both local and global optima. For future work, we intend to exploit fuzzy methods to determine the number of optima at the beginning of KGSA and to incorporate the “loop in loop” technique into unimodal problems.

References

[1] Haghbayan P., Nezamabadi-Pour H., Kamyab S., A niche GSA method with nearest neighbor scheme for multimodal optimization, Swarm and Evolutionary Computation, 35, 2017, 78-92

[2] Subhrajit R., Minhazul I., Das S., Ghosh S., Multimodal optimization by artificial weed colonies enhanced with localized group search optimizers, Applied Soft Computing, 13, 2013, 27-46

[3] Tang K. S., Man K., Kwong S., He Q., Genetic algorithms and their applications, IEEE Signal Processing Magazine, 13, 1996, 6, 22-37

[4] Kirkpatrick S., Vecchi M., Optimization by simulated annealing, Science, 220, 1983, 4598, 671-680

[5] Farmer J. D., Packard N. H., Perelson A. S., The immune system, adaptation, and machine learning, Physica D: Nonlinear Phenomena, 22, 1986, 1, 187-204

[6] Dorigo M., Maniezzo V., Colorni A., Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26, 1996, 1, 29-41

[7] Kennedy J., Particle Swarm Optimization, in “Encyclopedia of Machine Learning”, Sammut C., Webb G. I., eds., Springer, Boston, MA, 2011, 760-766

[8] Rashedi E., Nezamabadi-Pour H., Saryazdi S., GSA: a Gravitational Search Algorithm, Information Sciences, 179, 2009, 13, 2232-2248

[9] Rashedi E., Nezamabadi-Pour H., Saryazdi S., BGSA: binary gravitational search algorithm, Natural Computing, 9, 2010, 3, 727-745

[10] Brits R., Engelbrecht A. P., Van den Bergh F., A niching particle swarm optimizer, In: Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning (Orchid Country Club, Singapore), 2002, November 18, 692-696

[11] Thiemard E., Economic generation of low-discrepancy sequences with a b-ary Gray code, Technical Report, 1998, Department of Mathematics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

[12] Streichert F., Stein G., Ulmer H., Zell A., A clustering based niching method for evolutionary algorithms, In: Genetic and Evolutionary Computation Conference (Chicago, USA), 2003, July 9-11, Springer, 644-645

[13] Li X., Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization, In: Genetic and Evolutionary Computation Conference (Seattle, USA), 2004, June 26-30, Springer, 105-116

[14] Seo J. H., Im C. H., Heo C. G., Kim J. K., Jung H. K., Lee C. G., Multimodal function optimization based on particle swarm optimization, IEEE Transactions on Magnetics, 42, 2006, 4, 1095-1098

[15] Yazdani S., Nezamabadi-Pour H., Kamyab S., A gravitational search algorithm for multimodal optimization, Swarm and Evolutionary Computation, 14, 2014, 1-14

[16] Cuevas E., Reyna-Orta A., A cuckoo search algorithm for multimodal optimization, The Scientific World Journal, 2014

[17] Chang W. D., A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems, Applied Soft Computing, 33, 2014, 170-182

[18] Naik M. K., Panda R., A new hybrid CS-GSA algorithm for function optimization, In: IEEE International Conference on Electrical, Electronics, Signals, Communication and Optimization (Visakhapatnam, India), 2015, January 24-25, IEEE, 1-6

[19] Nekouie N., Yaghoobi M., A new method in multimodal optimization based on firefly algorithm, Artificial Intelligence Review, 46, 2016, 2, 267-287

[20] Rim C., Piao S., Li G., Pak U., A niching chaos optimization algorithm for multimodal optimization, Soft Computing, 22, 2016, 2, 621-633

[21] Tang K., Yang P., Yao S., Negatively correlated search, IEEE Journal on Selected Areas in Communications, 34, 2016, 3, 542-550

[22] Chang W. D., Multimodal function optimizations with multiple maximums and multiple minimums using an improved PSO algorithm, Applied Soft Computing, 60, 2017, 60-72

[23] Goldberg D. E., Richardson J., Genetic algorithms with sharing for multimodal function optimization, In: Proceedings of the Second International Conference on Genetic Algorithms and Their Applications (Cambridge, USA), 1987, Hillsdale: Lawrence Erlbaum, 41-49

[24] Pétrowski A., A clearing procedure as a niching method for genetic algorithms, In: Proceedings of IEEE International Conference on Evolutionary Computation (Nagoya, Japan), 1996, May 20-22, IEEE, 798-803

[25] De Jong K. A., An Analysis of the Behavior of a Class of Genetic Adaptive Systems, Technical Report, 1975, Computer and Communication Sciences Department, University of Michigan

[26] Mahfoud S. W., Simple analytical models of genetic algorithms for multimodal function optimization, In: Proceedings of the 5th International Conference on Genetic Algorithms (San Francisco, USA), 1993, San Francisco: Morgan Kaufmann Publishers, 643

[27] Mahfoud S. W., Crossover interactions among niches, In: Proceedings of the First IEEE Conference on Evolutionary Computation (Orlando, USA), 1994, June 27-29, IEEE, 188-193

[28] Mahfoud S. W., Niching methods for genetic algorithms, Urbana, 51, 1994, 95001, 62-94

[29] Mengshoel O. J., Goldberg D. E., Probabilistic crowding: deterministic crowding with probabilistic replacement, In: Genetic and Evolutionary Computation Conference (San Francisco, USA), 1999, July 8-12, San Francisco: Morgan Kaufmann Publishers, 409

[30] Kimura S., Matsumura K., Constrained multimodal function optimization using a simple evolutionary algorithm, In: IEEE Congress on Evolutionary Computation (New Orleans, USA), 2011, June 5-8, IEEE, 447-454

[31] Li X., Niching without niching parameters: particle swarm optimization using a ring topology, IEEE Transactions on Evolutionary Computation, 14, 2010, 1, 150-169

[32] Parrott D., Li X., Locating and tracking multiple dynamic optima by a particle swarm model using speciation, IEEE Transactions on Evolutionary Computation, 10, 2006, 4, 440-458

[33] Li X., Multimodal function optimization based on fitness-euclidean distance ratio, In: Genetic and Evolutionary Computation Conference (Washington, USA), 2007, June 25-29, New York: ACM, 78-85

[34] Zhang J., Zhang J. R., Li K., A sequential niching technique for particle swarm optimization, In: Advances in Intelligent Computing (Hefei, China), 2005, August 23-26, Berlin Heidelberg: Springer-Verlag, 390-399

[35] García S., Fernández A., Luengo J., Herrera F., Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power, Information Sciences, 180, 2010, 10, 2044-2064

About the article

Received: 2018-02-14

Accepted: 2018-08-02

Published Online: 2018-12-31


Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 1582–1606, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2018-0132.


© 2018 Golzari et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. BY-NC-ND 4.0
