EEG signal classification based on SVM with improved squirrel search algorithm

Abstract: Electroencephalography (EEG) is a complex bioelectrical signal whose analysis can provide researchers with useful physiological information. To recognize and classify EEG signals, a pattern recognition method that optimizes the support vector machine (SVM) with an improved squirrel search algorithm (ISSA) is proposed. The EEG signal is preprocessed, and its time-domain features are extracted and fed to the SVM as feature vectors for classification and identification. In this paper, the good-point set method is used to initialize the population positions, and chaos and a reverse learning mechanism are introduced into the algorithm. The performance of the improved squirrel search algorithm (ISSA) is tested on benchmark functions; statistical analysis of the results shows that the exploration ability and convergence speed of the algorithm are improved. The ISSA is then used to optimize the SVM parameters, and an ISSA-SVM model is built for the classification of EEG signals and compared with other common SVM parameter optimization models. On the data set used, the average classification accuracy of this method is 85.9%, an improvement of 2-5% over the comparison methods.


Introduction
Electroencephalography (EEG) signals were first recorded from the human brain by Berger in 1929, after which much research has been carried out on brain electrical signals, including brain-computer interfaces (BCIs). A BCI (or brain-machine interface [BMI]) is a device that constructs information channels to external devices by decoding human brain neural activity [1,2]. In recent years, faster computer hardware, machine learning, and applications of neuroscience have made BMI technology an emerging research trend [3]. BCIs have been widely applied in disease diagnosis, motion recognition, character spelling, and so on [4,5]. Soekadar et al. [6] studied an EEG/EOG-based hybrid brain-neural computer interaction (BNCI) system and used it to help disabled people control exoskeletons.
A BCI system generally goes through two stages: offline training and online operation. The offline stage refers to the refinement of the model, and the online stage refers to converting subjects' real-time EEG signals into commands that operate external devices [7]. In general, most BCIs rely on classification algorithms and signal analysis [8], and the analysis of EEG signals affects the subsequent classification. EEG signals are usually analyzed in the time domain, the frequency domain, or the time-frequency domain. Gunes et al. [9] combined time-domain features extracted from EEG signals with eye-movement signals to identify and classify people's sleep states, and applied the results to the treatment of sleep disorders [10]. Because EEG signals are non-stationary, researchers have in recent years begun to propose nonlinear features of EEG signals, such as various entropy values, as feature vectors for classification. Sharma et al. [11] used the empirical mode decomposition (EMD) method to extract different entropies of EEG signals as features, and their classification and recognition achieved very accurate results. Schlögl et al. [12] proposed that mutual information of the feedback might be important for BCI; researchers can analyze entropy to evaluate BCI experiments. In addition to signal analysis methods, many researchers also focus on improving and innovating classification algorithms.
The classification algorithms commonly used in BCI systems are divided into linear classifiers and neural networks [13]. The main linear method is linear discriminant analysis (LDA) [14]: a probability density model is generated from the data of each class, and new data are classified by this model; LDA is a classification algorithm based on this idea [15]. Hsu et al. [16] proposed an adaptive BCI system based on adaptive LDA to classify EEG signals, and the results show that the method works well in adaptive systems. Neural networks [17] and nonlinear SVM [18] are the two main nonlinear methods. Nihal et al. [19] used recurrent neural networks to classify the EEG signals of epileptic patients at seizure onset and in the normal state, and to predict the onset time. Another nonlinear method is the support vector machine (SVM), proposed by Vapnik et al. [20] in 1995; it is a supervised machine learning algorithm with broad applications in image recognition, text classification, and BCI [21-26]. In addition, with the availability of big data and the development of graphics processing units (GPUs) in recent years, deep learning has been widely applied to classification problems [27]. Robin et al. [28] proposed deep learning with convolutional neural networks (CNNs) for EEG decoding and visualization; based on comparisons across different experiments and methods, ConvNets are considered an effective approach to EEG decoding and visualization. Jirayucharoensak et al. [29] used a deep learning network (DLN) for emotion recognition, and the method performed better than other classification algorithms.
SVM is one of the more mature classification algorithms and performs well on classification problems [30]. Yeo et al. [31] considered SVM a good candidate for the analysis and detection of drowsiness in EEG. Compared with other traditional classification algorithms such as k-nearest neighbors (KNN) and the multi-layer perceptron (MLP), SVM has lower computational complexity [32,33]. Although the computational complexity of KNN decreases as the parameter K increases, its classification accuracy also decreases [32,34]. In recent years, with the advent of the era of big data, deep learning has become a research hotspot [35], yet SVM remains competitive. The most significant feature of SVM is that it is not a black-box method and rests on a solid mathematical foundation. In addition, SVM has fewer parameters and is easier to tune than deep learning, though it is worth noting that many current researchers are working to free neural networks from manual parameter tuning. When the data set is of moderate size, the performance of deep learning is not necessarily higher than that of SVM, and its computational efficiency is lower.
Admittedly, the performance advantage of deep learning is obvious when the data scale is very large [36,37]. Although SVM has been developed over a long time, it still has shortcomings [38-40]. In SVM, parameters must be adjusted manually, and different parameter values affect the classification results, so parameter optimization has long been a research direction for SVM. SVM has two important parameters, the penalty factor (c) and the kernel function parameter (g), whose selection exerts a great impact on the classification effect. A common approach is to use an optimization algorithm to tune the SVM parameters and thereby improve the classification effect. Huang et al. [38] proposed a genetic algorithm to optimize SVM parameters; compared with grid search, the genetic algorithm achieved significantly better classification accuracy on the test set. Wang et al. [39] combined particle swarm optimization (PSO) with SVM to predict daily minimum temperature and achieved high performance. Xing et al. [40] proposed an emotion-driven retrieval system based on DE-SVM, which outperformed back propagation (BP), linear regression (LR), and other methods, proving the feasibility of the approach. Wang et al. [41] used ACQIMO to optimize the SVM for pattern recognition of EMG signals. It can be seen that different optimization algorithms yield different SVM parameter optimization results.
Because intelligent optimization algorithms differ in their principles and mechanisms, their results vary across applications; in practice, the choice of optimization algorithm to combine with SVM deserves careful consideration. The squirrel search algorithm (SSA) [42] is a new optimization algorithm with the advantages of fast convergence and high accuracy. SSA is simple and efficient, and simulation results show that it outperforms currently popular optimization algorithms [42]. Walaa et al. [43] applied SSA to the bin packing problem and proposed an improved variant for low-dimensional instances, which improved both the accuracy and the execution time of the solution compared with similar algorithms. Although the SSA is simple and efficient, it still has shortcomings. In this paper, the SSA is improved and combined with SVM: the improved squirrel search algorithm (ISSA) is used to optimize the parameters of SVM and increase its classification accuracy.
The rest of this paper is organized into the following four sections. The Section "Squirrel search algorithm" introduces the basic SSA. The Section "Improvement of squirrel search algorithm" discusses our improvements and combines the ISSA with SVM to propose the ISSA-SVM model. The Section "Simulation experiment" has two parts: first, the ISSA is applied to test functions and compared with common optimization algorithms; then, after EEG data preprocessing and feature extraction, the ISSA-SVM model is used to identify and classify EEG signals and is compared with similar methods, with classification accuracy and model score as measurement indexes. The summary and limitations of this research are discussed in the Section "Conclusion".

Squirrel search algorithm
The SSA was proposed by Mohit et al. in 2018. The algorithm simulates the dynamic foraging behavior of southern flying squirrels [42]. In the SSA [42], it is assumed that a forest contains exactly one hickory nut tree, several oak trees, and many normal trees, with one squirrel on each tree. The hickory nut tree, oak trees, and normal trees represent the optimal, suboptimal, and general solutions, respectively. Squirrels jump between trees until they find the best food source, the hickory nut tree, which indicates the best solution. As they forage, squirrels alter their positions, which represents the updating of solutions. Mathematically, an n-dimensional vector represents one squirrel's position, with each component indicating one position variable.
In the SSA [42], Eq. (1) is used to generate initial positions from a uniform random distribution. After the fitness of each population location is assessed, the locations are ranked in descending order. According to this order, the population is divided into three groups: the best solution, the suboptimal solutions, and the general solutions.
In Eq. (1), FS_i refers to the position of the ith squirrel, which is an n-dimensional vector; U(0, 1) is a uniform distribution over [0, 1]; FS_U is the upper limit of the squirrel search range; and FS_L is the lower limit.
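The initialization and ranking described above can be sketched as follows. This is a hypothetical Python sketch, not the authors' code: Eq. (1) is taken, per the surrounding description, as FS_i = FS_L + U(0,1) × (FS_U − FS_L), and the group sizes follow the pseudocode later in the paper (first individual best, second to fourth suboptimal, the rest general).

```python
import random

def init_population(n_squirrels, dim, fs_lower, fs_upper, rng=random):
    """Eq. (1): each position component is drawn uniformly in [FS_L, FS_U]."""
    return [[fs_lower + rng.random() * (fs_upper - fs_lower) for _ in range(dim)]
            for _ in range(n_squirrels)]

def rank_population(population, fitness):
    """Sort positions by fitness (descending, as the paper states) and split
    into best / suboptimal / general groups."""
    ranked = sorted(population, key=fitness, reverse=True)
    best, suboptimal, general = ranked[0], ranked[1:4], ranked[4:]
    return best, suboptimal, general
```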

Location update formula
In the iterative process, the position update formula takes three forms, given in Eqs. (2)-(4). A squirrel on an oak tree moves toward the hickory nut tree. Squirrels on normal trees are divided into two batches, one heading for the oak trees and the other for the hickory nut tree. To improve the global search ability, Mohit et al. added the condition of a hunter appearing [42]: once this condition is met, a squirrel encounters a hunter, is forced to abandon its current route, and relocates randomly to a safe location in the search space.
Case 1. A squirrel on an oak tree heading toward the hickory nut tree.
Case 2. A squirrel on a normal tree heading toward an oak tree.
Case 3. A squirrel on a normal tree heading toward the hickory nut tree.
In Eqs. (2)-(4), P_dp refers to the predator presence probability, and R_1, R_2, and R_3 are random numbers within [0, 1]. G_c is a gliding constant in the mathematical model, whose value was determined to be 1.9 after rigorous analysis [42]. d_g is the gliding distance of a squirrel jumping between trees; to prevent excessive disturbance from being introduced into the equations, the natural gliding distance is scaled so that d_g lies in the range [0.5, 1.11]. d_g varies during each iteration [42], which greatly improves the local search ability of the algorithm. FS_ht^t, FS_at^t, and FS_nt^t are the positions of flying squirrels on the hickory nut tree, an oak tree, and a normal tree, respectively, with t being the iteration number.
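Since Eqs. (2)-(4) are not reproduced in this text, the sketch below follows the published SSA [42], in which all three cases share the same form: glide toward the target tree by d_g · G_c · (target − current) unless a random draw falls below P_dp, in which case the squirrel relocates uniformly at random. The value of P_DP here is an illustrative assumption.

```python
import random

P_DP = 0.1   # predator presence probability (illustrative assumption)
G_C = 1.9    # gliding constant, as given in [42]

def glide_toward(current, target, dg, lower, upper, p_dp=P_DP, rng=random):
    """One SSA position update (Cases 1-3 share this form in the published
    SSA [42]): move toward the target tree unless a predator appears, in
    which case relocate uniformly at random within the search bounds."""
    if rng.random() >= p_dp:  # no predator: glide toward the target tree
        return [x + dg * G_C * (t - x) for x, t in zip(current, target)]
    # predator present: jump to a random safe location in the search space
    return [lower + rng.random() * (upper - lower) for _ in current]
```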

Seasonal constant
Since squirrels' mobility changes with the seasons, seasonal monitoring conditions are introduced in the algorithm [42]. These conditions allow the algorithm to abandon local optimal solutions to a certain extent. First, the seasonal constant is calculated according to Eq. (5).
Next, the seasonal monitoring condition S_c^t ≥ S_min is checked, where S_min is calculated as follows. When the seasonal monitoring condition is satisfied, the squirrel population is relocated according to the following formula. In Eq. (7), Levy(n) can be calculated by the formula below, where r_a and r_b are two normally distributed random numbers and β is a constant, 1.5 [42].
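The Levy-flight relocation equations are not reproduced in this text; the sketch below follows the Mantegna-style Levy step used in the published SSA [42], with the relocation taken as FS_new = FS_L + Levy · (FS_U − FS_L) per Eq. (7)'s description. The 0.01 scale factor is part of that standard formulation, not a value stated in this paper.

```python
import math
import random

BETA = 1.5  # Levy exponent, the constant given in [42]

def levy_step(rng=random):
    """Mantegna-style Levy flight step, as used for seasonal relocation in
    the published SSA [42]; r_a and r_b are normally distributed draws."""
    num = math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2)
    den = math.gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2)
    sigma = (num / den) ** (1 / BETA)
    r_a, r_b = rng.gauss(0, 1), rng.gauss(0, 1)
    return 0.01 * r_a * sigma / abs(r_b) ** (1 / BETA)

def relocate(lower, upper, dim, rng=random):
    """Eq. (7)-style relocation: a new random position scaled by a Levy step."""
    return [lower + levy_step(rng) * (upper - lower) for _ in range(dim)]
```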

Improvement of squirrel search algorithm
The SSA is improved from two perspectives. The first is improving the initial positions using the good-point set method. The second is increasing the influence of the optimal position on the population as a whole, rather than merely using the global optimal position to guide evolution as the original algorithm does.

Good-point set
In most optimization algorithms, the initial population is generated from a uniform random distribution. It is known that the result of each iteration of an optimization algorithm has a certain relationship with the previous generation, and many researchers improve algorithms by changing how the initial population locations are generated; the improved algorithms produce more accurate results than the originals [44,45]. The distribution of the initial population thus affects the final result and the number of iterations to some extent [46], and improving it can make the algorithm converge faster. However, uniformly random initial population locations cannot evenly cover the entire solution space, whereas the good-point set method can.
Assuming that the initial population size is n and each individual has s dimensions, we can use a good-point set to initialize the population locations of the SSA. A good-point set of n points is selected to form the initial population space; there are three main methods for generating such a set [47].
(1) Square-root sequence method: r_k = {√p_k}, where p_k is the kth prime, 1 ≤ k ≤ s, and {·} denotes the fractional part. (2) Dividing-circle (cyclotomic field) method: r_k = {2 cos(2πk/p)}, where p is the smallest prime satisfying (p − 3)/2 ≥ s. (3) Exponential sequence method: r_k = {e^k}, 1 ≤ k ≤ s. Figure 1 shows the point sets generated by uniform random sampling and by the dividing-circle method, respectively. The spatial distribution of the points generated by the good-point set is clearly more uniform; in comparison, initial positions drawn from the uniform random distribution clump and overlap. Furthermore, for a fixed population size n, the distribution of the good-point set is more stable. Hence a uniform distribution of initial positions can be generated by applying the good-point set method.

Updated rule based on the global optimal position
(1) Chaotic perturbation of the optimal point. As a relatively new optimization idea, chaos has proven useful in many algorithm improvements [48-51]. Combining chaos with PSO, Liu et al. [51] proposed the chaotic particle swarm optimization (CPSO) algorithm, which improved the accuracy of the calculated results. Wang et al. [50] introduced the idea of chaos into the krill herd (KH) algorithm and accelerated its convergence rate; according to the test function results, the chaotic KH algorithm improves on the original. We likewise introduce chaos into the SSA and apply a chaotic perturbation to the optimal point after each update. In the original algorithm, the global optimal point does not change during an iteration but only serves as the evolution direction of the other points. In the improved algorithm, the global best solution is used to generate a new position by a chaotic map in each iteration; this new position and the global optimal position then compete, with the fitter surviving. The optimal solution thus gains local search ability, which helps the algorithm escape local optima. The chaotic map used in chaos optimization is usually the logistic map, given by x_{k+1} = μ x_k (1 − x_k).
where μ is the system parameter; when μ = 4, the system is in a chaotic state. The initial value k ∈ [0, 1] is generated by Eq. (12) in this paper, FS_A is the current best position, and FS_A^new is the new point produced by applying the chaotic map to the global best point. L_max and L_min are the upper and lower limits of the search range.
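A minimal sketch of this perturbation, under the assumption of a minimization problem: each coordinate of the best position is normalized into [0, 1] (standing in for Eq. (12), which is not reproduced in this text), pushed through one logistic-map step, mapped back, and kept only if it improves the fitness.

```python
MU = 4.0  # logistic-map parameter; the system is chaotic at mu = 4

def chaotic_perturb(best, fitness, l_min, l_max, mu=MU):
    """Perturb the global best with one logistic-map step per dimension:
    normalize each coordinate to [0, 1], apply x' = mu * x * (1 - x),
    map back to [l_min, l_max], and keep whichever position has better
    (lower) fitness. A sketch of the paper's chaotic update; the exact
    equations are not reproduced in this text."""
    span = l_max - l_min
    k = [(x - l_min) / span for x in best]           # normalize to [0, 1]
    k_new = [mu * x * (1 - x) for x in k]            # logistic-map step
    candidate = [l_min + x * span for x in k_new]    # map back to search range
    return candidate if fitness(candidate) < fitness(best) else best
```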
(2) Mechanism of elimination. In the SSA, the population is sorted by fitness after each iteration. The last few individuals in the ranking have deviated from the actual goal, and their positions update slowly, which may slow the convergence of the algorithm. To reduce the negative influence of poor individuals on the whole population, an elimination mechanism is added in the improvement. Reverse (opposition-based) learning was proposed by Tizhoosh [52]: during optimization, the current candidate solution and its reverse individual are considered simultaneously to avoid premature convergence. This method has already been applied in the artificial bee colony (ABC) algorithm and PSO, with good results [53,54].
In the improved squirrel algorithm, the last few individuals in the ranking are eliminated unconditionally after each position update. Reverse learning guided by the global best is then used to generate 10 new individuals to fill the gaps before the next iteration; no condition is attached to this replacement. The calculation is shown below, where F_new and F_old refer to the newly created individual and the eliminated individual, respectively. This method increases the diversity of the population and the influence of the global optimal position on the new population, improving the population diversity and global exploration ability of the algorithm.
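The elimination step can be sketched as follows. The paper's exact reverse-learning equation is not reproduced in this text, so the refill rule here, F_new = best + r · ((L_min + L_max − F_old) − best), is an assumed common global-best-guided opposition form, not the authors' formula; the function and parameter names are likewise illustrative.

```python
import random

def opposition_replace(population, fitness, best, l_min, l_max,
                       n_worst=10, rng=random):
    """Elimination-mechanism sketch (minimization assumed): drop the n_worst
    lowest-ranked individuals and refill with opposition-based candidates
    pulled toward the global best. The refill rule is an assumption:
    F_new = best + r * ((l_min + l_max - F_old) - best)."""
    ranked = sorted(population, key=fitness)        # ascending: best first
    survivors, eliminated = ranked[:-n_worst], ranked[-n_worst:]
    refilled = []
    for old in eliminated:
        r = rng.random()
        refilled.append([b + r * ((l_min + l_max - x) - b)
                         for x, b in zip(old, best)])
    return survivors + refilled
```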

Time complexity
The time complexity of ISSA depends on the maximum number of iterations, the problem size, and the population size. The calculation is given in Eq. (15), where T_max and N represent the maximum number of iterations and the population size, respectively. The term 2×O(N) is the time complexity of the population location update and sorting in each iteration, which depends on the population size. O(f) is the time complexity of a fitness calculation, which depends on the complexity of the problem. The formula shows that when solving complex optimization problems, the time complexity of ISSA depends mainly on the scale of the problem.
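Eq. (15) itself is not reproduced in the extracted text; a reconstruction consistent with the description above (per iteration, 2·O(N) for update plus sorting and O(f) per fitness evaluation, repeated T_max times) would be:

```latex
% Hedged reconstruction of Eq. (15) from the surrounding description.
T_{\mathrm{ISSA}} \;=\; O\!\bigl(T_{\max}\,(2N + N f)\bigr)
```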
Based on the aforementioned improvements, we construct the pseudocode of the improved squirrel algorithm.

ISSA-SVM
SVM is a machine learning algorithm based on statistical learning theory and the structural risk minimization principle. It has strong generalization ability, a solid mathematical theory behind its model, and handles small samples well. The standard SVM uses linear regression to fit the training data. Assume the training set is S = {(X_i, Y_i) | i = 1, 2, …, n}, where X_i is the input, Y_i is the output, and n is the number of samples. The regression function is:

Figure 1: The initial population of the two methods in two-dimensional space.
where ω is the weight vector and b is the bias; ω and b are obtained by solving the optimization problem below, subject to the stated conditions, where C is the penalty factor, ς_i and ς_i* are the slack variables, and ε is the insensitivity factor. To handle nonlinear problems, a kernel function is introduced into the model; the radial basis function (RBF) kernel is usually used in SVM, as shown below. The optimization problem is transformed into a quadratic programming problem via the Lagrangian, and finally the SVM regression equation based on the kernel function is obtained. The penalty factor (c) and the kernel function parameter (g) are two important SVM parameters whose selection has a great influence on classification results. In this paper, the ISSA is combined with SVM and used to optimize c and g to improve the classification accuracy. The BCI2003 competition EEG data set was used to test the model, as shown in Figure 2.
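A minimal sketch of the two pieces the ISSA actually touches: the RBF kernel and the mapping from a squirrel's position to the (c, g) pair. The kernel form exp(−g‖x − y‖²) and the parameter ranges are illustrative assumptions, since the paper's RBF equation and search ranges are not reproduced in this text.

```python
import math

def rbf_kernel(x, y, g):
    """RBF kernel K(x, y) = exp(-g * ||x - y||^2); g is the kernel parameter
    tuned by the ISSA (some texts parameterize it as 1 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-g * sq_dist)

def decode_squirrel(position, c_range=(0.01, 100.0), g_range=(0.001, 10.0)):
    """Map a 2-D squirrel position in [0, 1]^2 onto the SVM parameters (c, g);
    the ranges here are illustrative assumptions, not the paper's values."""
    c = c_range[0] + position[0] * (c_range[1] - c_range[0])
    g = g_range[0] + position[1] * (g_range[1] - g_range[0])
    return c, g
```

In the ISSA-SVM loop, each squirrel's fitness would then be the cross-validated classification accuracy of an SVM trained with its decoded (c, g).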

Simulation experiment
In this section, two simulation experiments are introduced. First, several test functions are used to test the performance of the improved squirrel algorithm; in the second experiment, the ISSA-SVM model is used to classify EEG signals. For the comparison of optimization algorithms, we choose the standard PSO [55], the grasshopper optimisation algorithm (GOA) [56], the ions motion algorithm (IMO) [57], and SSA. To ensure a fair comparison, all results are calculated under the same conditions: a Windows 7 system with an Intel(R) Xeon(R) CPU and 32 GB of memory. The maximum number of function evaluations is 1000 × dim, the population size is 50, and each optimization function is run 30 times independently.

Improved squirrel search algorithm (ISSA)
Input: iteration limit t_max; population size popsize; dimension dim. Output: best solution.
1. Set parameters and initialize the population locations using the good-point set approach.
2. Calculate the fitness of the population and rank the individuals in descending order of fitness.
3. According to the ranking, divide the population into three groups: the first individual is the best solution, the second to fourth are the suboptimal solutions, and the rest are general solutions.
4. Update the squirrel positions using Eqs. (2)-(4).
5. Generate a new position from the best point by the chaotic map and make an elite selection between the two.
6. Repeat steps 4 and 5.
7. If the seasonal monitoring condition is met, update the squirrel positions using Eq. (7).
8. Eliminate the lowest-ranked squirrel individuals and generate new individuals by reverse learning.
9. Update the seasonal constant using Eq. (5).
10. If the iteration limit is reached, end the iteration and output the current best position; otherwise return to step 4.
11. End.

Performance test of improved squirrel search algorithm
Four single-peak, four multi-peak, and four fixed-dimension complex multi-peak test functions [42,56,58] are used to evaluate the algorithm performance. These functions are shown in Table 1. Four indicators are used in the result statistics: the best value, the worst value, the average value, and the variance, with the results reported in Tables 2-4.
Tables 2 and 3 and Figures 3 and 4 demonstrate that the improved algorithm generally achieves lower values of these four indicators on the multi-peak and single-peak test functions. In terms of convergence speed and accuracy, it is superior to the other four algorithms.
In the fixed-dimension complex multi-peak test functions, the improved algorithm does not perform as well as it does in the single-peak and multi-peak test functions. In TF10, all the algorithms converge to the optimal value, but as can be seen from Figure 5B, the proposed algorithm has the highest convergence speed. In TF11, although the proposed improvement is better than the original algorithm, PSO performs best on this test function. In general, the improved algorithm remains competitive on complex test functions. Lee et al. [60] converted EEG signals into images using the continuous wavelet transform (CWT); these images are then classified and identified as inputs to a CNN. The idea of this method is very interesting and worth referencing. These papers preprocessed the signals using current mainstream signal-processing methods, which improved the classification accuracy of the signals.

The dataset and preprocessing
(1) Description of the data set. The data set records a 25-year-old healthy female subject who sat casually in a chair and controlled a feedback bar by imagining left- and right-hand movements. (2) Preprocessing of the data set. When a certain area of the cerebral cortex is activated, the electrophysiological phenomenon of signal energy decrease in certain frequency bands is called event-related desynchronization (ERD). When a certain area of the brain is in a quiet state, the energy of certain frequency bands rises, which is known as event-related synchronization (ERS) [63]. When people imagine hand movement, the μ rhythm and β wave of the C3/C4 channels in the sensorimotor area of the brain show obvious ERD/ERS phenomena [64]. The μ rhythm and β wave are mainly concentrated within 8-30 Hz, hence we first filter the data. We use an elliptic filter whose passband matches this energy band, with passband ripple below 1 dB and 40 dB of signal attenuation within 5 Hz on both sides of the passband.
After filtering the signal data, we need to segment it [65]. The experiment lasted 9 s in total, but the ERD/ERS phenomenon did not persist throughout. If we computed energy features over the full-length data, the characteristics of time segments unrelated to the event would inevitably affect the classification results. Therefore, we use a time window to divide the data into periods and then calculate the energy difference within the same time window while the subject imagines hand movements. The formulas are given in Eqs. (23) and (24). In Eq. (23), k represents the kth time window and the value 128 is the signal sampling frequency.
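Since Eqs. (23) and (24) are not reproduced in this text, the sketch below assumes the common form: the energy of window k is the sum of squared samples over one 128-sample (1 s) window, and the feature of Eq. (24) is the per-window energy difference between channels C3 and C4. Function names are illustrative.

```python
def window_energy(signal, k, win_len=128):
    """Energy of the k-th time window (k counted from 0); win_len = 128
    matches the data set's sampling frequency, giving 1 s windows.
    A sketch of Eq. (23), which is not reproduced in this text."""
    seg = signal[k * win_len:(k + 1) * win_len]
    return sum(x * x for x in seg)

def channel_energy_difference(c3, c4, k, win_len=128):
    """Eq. (24)-style feature: per-window energy difference between the
    C3 and C4 channels during imagined hand movement."""
    return window_energy(c3, k, win_len) - window_energy(c4, k, win_len)
```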
Since the imagined movement starts after the third second, we calculate the energy difference between the C3 and C4 channels after this point. As shown in Figure 7, the energy difference of the channels starts to rise after the third second, peaks at the seventh second, and then begins to fall. This means the ERD/ERS phenomenon appears after the third second and begins to decline after the seventh second. We therefore treat the period of 4-7 s as the feature time period.
(3) Feature extraction and classification. Time-domain features have long been effective indicators for EEG signal classification [66]. After the data are preprocessed, two time-domain features are extracted as eigenvalues by time-domain analysis: the integral EEG (IEEG) and the root mean square (RMS), with formulas shown below:
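The IEEG and RMS formulas themselves did not survive extraction; the sketch below uses their standard time-domain definitions, IEEG = Σ|x_i| and RMS = sqrt(mean(x_i²)), which are assumed to match the paper's.

```python
import math

def ieeg(segment):
    """Integral EEG: sum of absolute amplitudes over the feature window
    (IEEG = sum_i |x_i|), a standard time-domain EEG feature."""
    return sum(abs(x) for x in segment)

def rms(segment):
    """Root mean square amplitude: sqrt(mean(x_i^2))."""
    return math.sqrt(sum(x * x for x in segment) / len(segment))
```

Computing both features for each of the three channels (C3, C4, Cz) yields the six-dimensional feature vector described in the next subsection.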

The EEG signals are collected from the three channels C3, C4, and Cz. After feature extraction, each single sample is integrated into a six-dimensional feature vector. The improved squirrel algorithm is used to optimize the SVM parameters and improve the classification accuracy; in this paper, the SVM uses an RBF kernel. The EEG data set is divided into a training set (70%) and a test set (30%). To eliminate outliers in the results, we repeat the program 30 times and compare the final classification accuracy of the different optimization algorithms. As can further be seen from Figure 9, except for these two models, the results calculated by the other three methods on the training set and the test set are relatively close. The ISSA-SVM proposed in this paper, however, has comparative advantages on both the training set and the test set. In terms of F1-score, the method in this paper scores 85.1%, the highest among the five methods. For this data set, Brodu et al. [67] obtained a classification accuracy of 82.1% in their study of power extraction techniques. In [68], multifractal cumulants and predictive complexity were used to analyze the EEG, yielding an accuracy of 80.7% on the data set. The average results of this paper show a 4-5% improvement over other articles using the same data set. It is worth noting that in [60], Lee et al. used the CWT to convert the EEG into a 93 × 32 feature map and then exploited the advantages of CNNs in image processing to obtain a high classification accuracy (92.9%). The method in [60] is enlightening, but the time complexity of the model is higher. From the point of view of feature engineering, this paper extracts time-domain features of EEG signals, whose time complexity is O(1). The time complexity of the CWT is usually O(s log s), where s represents the signal length. Analyzing the classification model, the time complexity of a CNN is shown in Eq. (27).
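Eq. (27) did not survive extraction; the standard per-sample convolutional complexity (cf. [69]) that the following paragraph describes would be:

```latex
% Hedged reconstruction of Eq. (27). C_{l-1} and C_l (input/output channel
% counts of layer l) are an assumption, since the text names only M and K.
\text{Time} \;\sim\; O\!\left(\sum_{l=1}^{D} M_l^{2}\, K_l^{2}\, C_{l-1}\, C_l\right)
```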

Analysis of classification results of EEG signals
where D and l represent the total number of convolutional layers and the convolutional-layer index, respectively, and M and K represent the side lengths of the feature map and the convolution kernel. Eq. (27) includes only the computational complexity of the convolutional layers, excluding the pooling and fully connected layers [69]. In addition, Eq. (27) represents the time complexity of a single sample in a CNN, and the time complexity of the training stage is three times that of the testing stage [69]. As shown in Figure 9, the classification accuracy of ISSA-SVM on the training set and the test set reaches the highest values of 100 and 89.5%, respectively. The time complexity of ISSA-SVM is mainly determined by the complexity of the SVM, which is O(P × dim), where P is the number of samples and dim is the number of features [70]. When the sample size is small, the time complexity of the classification model proposed in this paper is lower. To summarize, we have demonstrated that the proposed method is feasible for EEG signal pattern recognition.

Conclusion
In this paper, the SSA is improved. The distribution of the initial population is improved by using the good-point set method, and the ideas of chaos and reverse learning are introduced into the SSA, which improves the exploration ability and convergence speed of the algorithm. The ISSA was tested on 12 commonly used benchmark functions; as the result tables and figures show, ISSA performs better than several commonly used optimization algorithms. We then use the ISSA to optimize the important parameters of SVM and construct an ISSA-SVM model to identify and classify EEG signals. Finally, the classification accuracy of this method is 85.9% and its F1-score is 85.1%. Compared with SVM combined with other optimization algorithms, the method performs better on both F1-score and accuracy, and compared with other references using the same data set, the classification accuracy obtained in this paper is also very competitive. Nevertheless, the ISSA has not been tested on some complex test functions, and ISSA-SVM is mainly used for the binary classification of EEG signals; we hope to use this model for multi-class recognition of signals in the future. The research also shows that the signal preprocessing process has an obvious influence on classification results. Because this paper focuses mainly on the performance of the classification model, its signal-processing methods are relatively weak. We will pay more attention to feature extraction and feature selection in follow-up research and combine them with the model proposed in this paper to further improve the classification accuracy.

Table  :
Testing function.

Table  :
Single peak test function results.Grasshopper optimisation algorithm; PSO, Particle swarm optimization; IMO, Ions motion algorithm; SSA, Squirrel search algorithm.The italics indicated that the value is optimal in the same row.

Table  :
Multi-peak test function results.Grasshopper optimisation algorithm; PSO, Particle swarm optimization; IMO, Ions motion algorithm; SSA, Squirrel search algorithm.The italics indicated that the value is optimal in the same row.

Table  :
Results of complex test functions in fixed dimensions.Grasshopper optimisation algorithm; PSO, Particle swarm optimization; IMO, Ions motion algorithm; SSA, Squirrel search algorithm.The italics indicated that the value is optimal in the same row.

Table  :
Test set evaluation index.GOA-SVM PSO-SVM IMO-SVM SSA-SVM This paper Grasshopper optimisation algorithm-support vector machine; PSO-SVM, Particle swarm optimization-support vector machine; IMO-SVM, Ions motion algorithm-support vector machine; SSA-SVM, Squirrel search algorithm-support vector machine.
Figure 9: (A), (B) Statistical results of classification for the test sets and training sets.