Performance Modeling of Some of the Recent Competitive Swarm Artificial Intelligence-Based Load Balancing Techniques in the Cloud

Abstract: Cloud computing deals with voluminous heterogeneous data, and there is a need to effectively distribute the load across clusters of nodes to achieve optimal performance in terms of resource usage, throughput, response time, reliability, fault tolerance, and so on. Swarm intelligence methodologies use artificial intelligence to solve computationally challenging problems like load balancing, scheduling, and resource allocation within finite time intervals. In the literature, many works address the load balancing problem in the cloud using traditional swarm intelligence techniques like ant colony optimization, particle swarm optimization, cuckoo search, bat optimization, and so on. However, the traditional swarm intelligence techniques have issues with respect to convergence rate, arriving at the globally optimal solution, implementation complexity, and scalability, which limits their applicability in the cloud domain. In this paper, we look into performance modeling aspects of some of the recent competitive swarm artificial intelligence-based techniques, namely the whale, spider, dragonfly, and raven algorithms, which are used for load balancing in the cloud. The results and analysis are presented over performance metrics such as total execution time, response time, resource utilization rate, and throughput, and it is found that the raven roosting algorithm performs better than the other techniques.


Introduction
Cloud computing is a distributed computing paradigm used to store huge amounts of data and to provide software, platform, or infrastructure services on demand to users, with features like high fault tolerance, scalability, availability, robustness, and so on [1]. As it deals with huge amounts of data, there is a need to effectively distribute the load among the nodes in the cloud while making sure that no node becomes overloaded or underloaded. Effective load balancing increases the overall performance of cloud applications in terms of efficient resource utilization, lowered latency, bottleneck avoidance, and increased job completion rate. However, there are several issues related to load balancing in the cloud, like the varying Quality of Service (QoS) requirements of users, sudden failure of resources, traffic variation over different geographical areas, communication overhead, frequent migration of traffic, wastage of resources, inconsistent service abstractions, and so on [2][3][4][5][6].
Swarm artificial intelligence-based techniques use collective intelligence among many entities to solve Nondeterministic Polynomial time (NP) problems by properly handling imprecise, inefficient, and uncertain input data. A few swarm intelligence survey papers available in the literature provide theoretical descriptions followed by algorithms of swarm intelligence techniques like the bat algorithm, ant colony optimization, particle swarm optimization, and so on. Most of these algorithms are old and are discussed from a biological optimization point of view; they lack a description from the application and performance points of view [7][8][9]. Recently developed swarm artificial intelligence techniques include ageist spider monkey, shark smell optimization, whale optimization, lion optimization, spider, antlion optimization, jellyfish food search, eagle search, elephant, raven roosting, dragonfly, crow search, and others. Not all of these techniques are suitable for the cloud domain; this paper focuses on performance modeling of some of the efficient techniques used for cloud load balancing, which include whale optimization, spider, dragonfly, and raven roosting [10][11][12][13][14][15]. The performance modeling results demonstrate that, among the considered techniques, raven roosting performs better than the others.
The rest of the paper is structured as follows: Section 2 describes the system model, Section 3 gives the mathematical definitions of the performance metrics used to evaluate the swarm intelligence-based load balancing techniques, Section 4 provides the performance modeling of the whale optimization load balancing technique, Section 5 of the spider load balancing technique, Section 6 of the dragonfly load balancing technique, Section 7 of the raven roosting load balancing technique, Section 8 deals with the results and discussion, and finally, Section 9 draws the conclusion.

System Model
Consider a cloud computing environment CCE comprising several data center partitions p_k's monitored by a main controller MC, i.e., CCE = {MC, {p_1, p_2, p_3, ..., p_k} ∈ P}, where P ≠ ∅ is a universal partition set comprising several partitions. Every p_k is a collection of several virtual machines monitored by a partition controller PC, p_k = {PC, {vm_1, vm_2, ..., vm_m} ∈ VM}, where VM ≠ ∅ is a universal set comprising m virtual machines. The possible state of p_k is described by a virtual machine vector vm_ks^pk, where vm_ks^pk denotes a virtual machine with job set k running a swarm intelligence-based load balancing technique of type s on p_k.
At time ∆T, the MC receives a load status message ls_i ∈ LS from the PC of every p_k, i.e., p_k(LS, ∆T) = {PC_id, LS = Σ ls_i}, where PC_id is the unique partition controller identifier, ls_i = {i, n, o} is the status of the load on vm_i in p_k, i.e., idle, normal, or overloaded, and LS ≠ ∅ is the universal load status set of the partition. Let j_k be the incoming job request set at the MC; j_k will be sent to normal or idle partitions, during which the PC runs the Load Balancing Technique (LBT) with a set of functional modules fm_i's to evenly distribute the load among the virtual machines in p_k. A high-level view of the system model considered for load balancing in a typical cloud environment is shown in Figure 1.
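The system model above can be sketched as a small set of data structures. This is a hypothetical illustration: the class and field names, and the first-eligible dispatch policy, are assumptions rather than part of the model.

```python
from dataclasses import dataclass
from enum import Enum

class LoadState(Enum):
    IDLE = "i"
    NORMAL = "n"
    OVERLOADED = "o"

@dataclass
class VirtualMachine:
    vm_id: int
    state: LoadState = LoadState.IDLE

@dataclass
class Partition:
    pc_id: int     # partition controller identifier PC_id
    vms: list      # virtual machines vm_1 .. vm_m

    def load_status(self):
        """Load status message ls_i for every VM, sent to the MC at time dT."""
        return {"PCid": self.pc_id, "LS": [vm.state for vm in self.vms]}

@dataclass
class MainController:
    partitions: list

    def dispatch(self, job_set):
        """Send incoming jobs j_k only to partitions that are not fully overloaded."""
        eligible = [p for p in self.partitions
                    if any(vm.state != LoadState.OVERLOADED for vm in p.vms)]
        # simplest illustrative policy: route the job set to the first eligible partition
        return {p.pc_id: job_set for p in eligible[:1]}
```

In the paper's model the actual placement inside a partition is delegated to the PC's load balancing technique; the dispatch policy here only captures the MC-level routing to normal or idle partitions.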

Definition
This section gives the mathematical definitions of the performance metrics used in the paper to evaluate the efficiency of the selected load balancing techniques.

Total execution time
The total execution time TE of j_k on p_k with VM is defined as the total time taken to execute every functional module fm of the LBT.

Response time
The response time RT of j_k on p_k with VM is the total time elapsed between the execution of the first functional module fm_f and the last functional module fm_l of the LBT.

Resource utilization rate
The resource utilization rate RU of j_k on p_k with VM is a measure of the number of virtual machine resources that are left idle, N_idle_ri, or overutilized, N_over_ri, by j_k with respect to the total number of resources N_ri in p_k.

Throughput
The throughput TH of j_k on p_k with VM is the rate at which the jobs j_q among the allocated j_k complete successfully under the LBT.
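The four metric definitions above can be expressed as simple computations. This is an illustrative sketch: the function signatures, and the interpretation of RU as the share of resources that are neither idle nor overutilized, are assumptions.

```python
def total_execution_time(module_times):
    """TE: sum of the time taken by every functional module fm of the LBT."""
    return sum(module_times)

def response_time(t_first_start, t_last_end):
    """RT: time elapsed between the first (fm_f) and last (fm_l) functional module."""
    return t_last_end - t_first_start

def resource_utilization_rate(n_idle, n_over, n_total):
    """RU: share of the N_ri resources that are neither idle nor overutilized."""
    return (n_total - n_idle - n_over) / n_total

def throughput(jobs_completed, jobs_allocated):
    """TH: rate at which the allocated jobs j_k complete successfully."""
    return jobs_completed / jobs_allocated
```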

Whale optimization LBT
We consider some of the methods dealing with load distribution among the nodes in the cloud using the foraging behavior of humpback whales as the reference for performance modeling. The methods considered are the Whale optimization based Scheduler (W-Scheduler), the Multi-objective Whale Optimization Algorithm (WOA) for task scheduling and load balancing, the Levy flight trajectory-based Whale Optimization Algorithm (LWOA), and the improved Levy-based whale optimization algorithm for virtual machine placement [16][17][18][19][20][21]. The technique uses the bubble-net hunting strategy of whales to distribute the load evenly among the virtual machines, as shown in Figure 2. The spiral simulated hunting behavior of whales, with the best search policy to chase the prey, is used to select optimal virtual machines that are capable of executing the allocated jobs.

Performance Modeling
The virtual machines in p_k are considered as whales and the jobs as preys. The search agent population set SA is initialized to null, the fitness of every search agent sa_i ∈ SA is computed, and then the best-fit search agent is selected randomly. Encircling of the prey happens by considering the current solution as the best solution, because the optimal solution in the search space is not known in advance; while encircling, the search agent performs hypercube movements around the neighborhood of the prey. After the encircling operation, the exploitation phase is initiated, which tries to mimic the strategy used by the humpback whale to develop a bubble net around the prey using two operations: shrinking of the encircling operation and spiral updating. The shrinking operation determines the new position of the search agent in between its original position and the position of the current best agent; the spiral updating operation forms a spiral between the whale and the prey to mimic the helix movement of the humpback whale. Finally, in the exploration phase, the position of the search agent is updated using a randomly chosen agent, as the humpback whale searches for its prey randomly by knowing the position of other whales, to arrive at the globally optimal solution.

Total execution time
The TE(j_k, p_k, VM) is the total time taken to initialize the search agents t_I(j_k, p_k, vm_i), compute the fitness of the search agents t_F(j_k, p_k, vm_i), perform the encircling operation t_En(j_k, p_k, vm_i), perform the exploitation operation t_Expi(j_k, p_k, vm_i), and perform the exploration operation t_Expo(j_k, p_k, vm_i).
The t_F(j_k, p_k, vm_i) is dependent on the makespan and budget functions of the incoming jobs, where M(j_k) is the makespan function, which needs to be lower than the deadline of the jobs, M(j_k) ≤ Σ_k D_jk, and B(j_k) is the budget function, which needs to be lower than the user budget cost of the jobs, B(j_k) ≤ Σ UBC_jk. The t_En(j_k, p_k, vm_i) involves finding the position of the current best solution and updating the position of the search agent to a new position.
Here the distance to the prey is P = |Q · R* − R|, the updated position is R = R* − M · P, R* is the optimal position of the search agent, and M, Q are the coefficient vectors. The t_Expi(j_k, p_k, vm_i) involves shrinking the encircle and updating the spiral formed. The t_Es(j_k, p_k, vm_i) is the time taken to shrink the encircle by decreasing the coefficient vectors, and the spiral is updated as t_Su(j_k, p_k, vm_i) = P′ · cos(2 · ϕ · α) + R*, where ϕ, α are empirical constants and P′ = |R* − R|. The position of the search agent is updated either by the encircling operation or by the spiral updating operation with probability p. The t_Expo(j_k, p_k, vm_i) involves updating the position of the search agent using a randomly chosen agent.
Here R_rand represents the random position vector and R = R_rand − M · P. The total execution time is influenced by t_Expo(j_k, p_k, vm_i): the search agents of the whale optimization algorithm extensively search promising solutions, which may increase the convergence rate, and there is a need to adaptively vary the search vector so that a smooth transition can be achieved between exploration and exploitation. However, the algorithm has only two internal parameters, M and Q, so tuning these parameters is easy while arriving at optimal solutions. As a result, the overall execution time is neither very high nor very low but keeps fluctuating.
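The three position updates discussed in this subsection can be sketched in a few lines, following the expressions given above for encircling, spiral updating, and exploration. Scalar positions, the probability-0.5 switch between the two exploitation operations, and the |Q| ≥ 1 exploration condition taken from the response-time discussion are simplifying assumptions.

```python
import math
import random

def encircle(r, r_best, M, Q):
    """Shrinking-encircling update: P = |Q*R* - R|, R_new = R* - M*P."""
    P = abs(Q * r_best - r)
    return r_best - M * P

def spiral_update(r, r_best, phi, alpha):
    """Helix (spiral) update around the prey: R_new = P' * cos(2*phi*alpha) + R*."""
    P_prime = abs(r_best - r)
    return P_prime * math.cos(2 * phi * alpha) + r_best

def explore(r, r_rand, M, Q):
    """Exploration via a randomly chosen agent: R_new = R_rand - M*P."""
    P = abs(Q * r_rand - r)
    return r_rand - M * P

def woa_step(r, r_best, r_rand, M, Q, phi=1.0):
    """One iteration: explore when |Q| >= 1, else encircle or spiral with probability 0.5."""
    if abs(Q) >= 1:                  # exploration phase
        return explore(r, r_rand, M, Q)
    if random.random() < 0.5:        # exploitation: shrink the encircle
        return encircle(r, r_best, M, Q)
    alpha = random.uniform(-1, 1)    # random point on the spiral
    return spiral_update(r, r_best, phi, alpha)
```

With only M and Q to tune, a scheduler built on these updates stays simple, which matches the observation above that parameter tuning is easy.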

Response time
The RT(j_k, p_k, VM) is the time elapsed between t_I(j_k, p_k, vm_i) and t_Expo(j_k, p_k, vm_i).
The algorithm uses the bubble-net attacking operation of the humpback whale to simulate the search for the best match between jobs and virtual machines, as it defines the search space in the neighborhood of the best matches; the bubble net employs an adaptive search strategy to update the search vector by dedicating some iterations to exploration (|Q| ≥ 1) and the remaining to exploitation (|Q| < 1). But the search agents abruptly change their search policy during the initial stages of the optimization, which results in a slow convergence rate; as a result, the response time of the jobs increases with the increase in the number of virtual machines.

Resource utilization rate
The RU(j_k, p_k, VM) is proportional to the augmented value of RT(j_k, p_k, VM) and the constant Φ.
The jobs are allocated among virtual machines by mimicking the hypercube and helix movements of the humpback whale; these movements make the optimization technique a global optimizer, and it performs an exhaustive search to find the best-fit virtual machines by considering the QoS requirements of the jobs. The periodic evaluation of the resource utilization status among the virtual machines reveals that, while achieving higher accuracy in mapping, there are chances of an abrupt transition from exploration to exploitation, leading to inefficient utilization of resources in p_k.

Throughput
The TH(j_k, p_k, VM) is proportional to the rate of successful execution of jobs among the virtual machines in p_k, which is influenced by t_Expi(j_k, p_k, vm_i) and t_Expo(j_k, p_k, vm_i).
The whale optimization algorithm often fails during the initial iterations of the optimization while finding the best match between the jobs and virtual machines, and its tendency to converge to a locally optimal solution is also very high. But over the iterations the technique achieves a balance between the exploration and exploitation phases; this reduces the chances of jobs breaking down during execution but increases the migration rate of jobs, and as a result, the rate of successful completion of jobs becomes static over the iterations of the optimization.

Spider LBT
The methods using the foraging behavior of spiders in their social colony to balance the load across the nodes in the cloud are considered as the reference for performance modeling. The methods considered are the Social Spider Cloud Web algorithm (SSCWA), the chaotic social spider load balancing algorithm, Load Balanced Task Allocation based on Social Spider Optimization (LBTA_SSO), and the spider mesh overlay [22][23][24][25][26][27][28]. The spiders interact with each other through vibrations and vary the intensity of the vibrations with respect to distance; this feature helps in properly identifying the position of the resources in the cloud and reduces load imbalance due to premature convergence to locally optimal solutions. The social interaction among the spiders within a partition's web and across partition webs to balance the load among the widely distributed virtual machines is shown in Figure 3.

Performance Modeling
The jobs are represented as spiders and the virtual machines are treated as preys or food sources. The population of jobs in j_k is initialized with the spider parameters position, fitness, and vibration intensity, j_k = <p, f, vi>. The virtual machines in p_k act as resources of the cloud, and their fitness is measured in terms of utilization capacity, f(vm_i) = <uc>. The jobs with high QoS requirements tend to vibrate more in the colony of the web, and to prevent confusion among the jobs in acquiring the same set of virtual machines, the intensity attenuation rate is also varied over distance. The process of finding the appropriate virtual machines for the job set is carried out until a stopping condition is reached, and the memory of each job is updated with the best matching virtual machine addresses.

Total execution time
The TE(j_k, p_k, VM) is the total time taken to initialize the jobs and virtual machines with the spider and prey parameters t_I(j_k, p_k, vm_i), generate vibrations t_V(j_k, p_k, vm_i), vary the intensity of the vibrations t_IV(j_k, p_k, vm_i), and allocate the jobs to virtual machines with matching vibrations t_A(j_k, p_k, vm_i).
The t_V(j_k, p_k, vm_i) is dependent on the position of the virtual machine pos_i, the maximum resource utilization ratio RU_max, and the minimum resource utilization ratio RU_min of the virtual machine, where f(pos_i) represents the fitness of the virtual machine at pos_i. The t_IV(j_k, p_k, vm_i) is dependent on the distance D(pos_i, jpos_j) between the virtual machine at position pos_i and the job at position jpos_j, where UP is a user-controlled attenuation parameter, and the allocation t_A(j_k, p_k, vm_i) depends on best(vm_i), the virtual machine with the best vibration intensity.
The main objective of the spider algorithm is to keep the vibration intensity value positive. After attaining a positive intensity value, a random walk of the spider is carried out to locate the best matching virtual machine for the jobs. As a result, the execution time does not fluctuate much and remains nearly constant over time.
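A minimal sketch of the vibration-based matching described above, assuming an exponential attenuation of intensity over distance with the user-controlled parameter UP governing the decay; the attenuation form and all names here are assumptions, not the paper's equations.

```python
import math

def attenuated_intensity(source_intensity, distance, UP):
    """Vibration intensity received at `distance` from the source; larger
    UP means slower decay (UP plays the user-controlled attenuation role)."""
    return source_intensity * math.exp(-distance / UP)

def best_vm(job_pos, source_intensity, vm_positions, UP=1.0):
    """Map a job to the virtual machine where its vibration arrives strongest,
    i.e. the closest VM under this monotone attenuation."""
    received = {vm: attenuated_intensity(source_intensity,
                                         abs(job_pos - pos), UP)
                for vm, pos in vm_positions.items()}
    return max(received, key=received.get)
```

Because the intensity is always positive under this form, the random-walk refinement mentioned above only has to compare received intensities, which is consistent with the near-constant execution time noted in the text.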

Response time
The RT(j_k, p_k, VM) is the time elapsed between t_I(j_k, p_k, vm_i) and t_A(j_k, p_k, vm_i).
The spider algorithm uses a vibration-intensity-based search mechanism to map j_k to the appropriate vm_i ∈ VM, but the vibration intensity exhibited by the jobs drops exponentially and becomes more restrictive when the job arrival rate increases; as a result, the response time goes high.

Resource utilization rate
The RU(j_k, p_k, VM) is inversely proportional to the augmented value of RT(j_k, p_k, VM) and the constant Φ.
The RU(j_k, p_k, VM) is influenced by the accuracy of the mapping of j_k to the appropriate vm_i ∈ VM in p_k. The accuracy of the mapping is low because the attenuation of the vibration varies with distance and a random walk mechanism is followed to locate the best matching virtual machine, which leads to premature convergence whenever the virtual machines exhibit unsettled behavior; as a result, the resources remain in a sub-optimally utilized state for a long time.

Throughput
The TH(j_k, p_k, VM) is dependent on the vibration intensity of the spider parameters of the jobs and the prey parameters of the virtual machines.
The spider algorithm automatically changes the vibration attenuation rate, and many QoS parameters are taken into consideration during vibration. This feature exploits the globally optimal match between j_k and vm_i ∈ VM, but if the vibration rate of several job sets remains the same for a long time, there are chances of collision between the job sets over similar types of virtual machines. As a result, the rate of successful completion of the jobs decreases with the increase in similar types of virtual machines.

Dragonfly LBT
The methods using the static and dynamic swarming behaviors of dragonflies for load balancing in the cloud are considered as the reference for performance modeling. The methods considered are Deadline Aware Multi-Objective Dragonfly Optimization (DAMO), the Dragonfly Optimization Algorithm (DOA), the dragonfly algorithm with dragonfly parameters, and constraint measure based dragonfly optimization [29][30][31][32][33]. The interaction among the dragonflies while navigating, their food search procedure, and even the steps they follow to avoid enemies while swarming are used to provide the best load balancing solution in crucial situations, especially when the ratio of incoming jobs to available resources in the nodes is inappropriate. The dragonfly swarming behavior exhibited by the jobs within partitions, forming swarms of jobs of varying size that move towards the best-fit virtual machines, is shown in Figure 4.

Performance Modeling
The jobs are represented as dragonflies and the virtual machines are treated as food sources. The population of jobs in j_k is initialized with the dragonfly factors separation, alignment, cohesion, attraction, and distraction, j_k = <s, al, c, at, d>. Separation is the parting of one dragonfly from the others to avoid static collision in the neighborhood during food search, alignment is the matching of one dragonfly's velocity to that of the other dragonflies in the neighborhood, cohesion is the urge towards the center of the neighborhood, attraction is the interest towards the food source, and distraction is the disturbance caused by the movement of enemies. The movement of the dragonflies is guided by the step vector and the position vector (∆S, ∆P), which update the positions of the dragonflies in the large search space and navigate their movement until the stopping criterion is fulfilled.

Total execution time
The TE(j_k, p_k, VM) is the total time taken to initialize the dragonfly parameters of the jobs along with the step vector and the position vector t_I(j_k, p_k, VM), calculate the separation factor of a job from the other jobs t_S(j_k, p_k, VM), calculate the alignment of a job towards the other jobs t_Al(j_k, p_k, VM), calculate the cohesion of the jobs towards the mass of the neighborhood t_C(j_k, p_k, VM), calculate the attraction towards the virtual machine t_At(j_k, p_k, VM), calculate the distraction from the enemies t_D(j_k, p_k, VM), and update the step vector and the position vector t_U(j_k, p_k, VM). Here, N is the number of neighboring jobs, V is the velocity of the i-th job among the N jobs considered, P is the position of the job, P+ is the position of the virtual machine, and P− is the position of the enemy.
The jobs with high alignment weight and low cohesion weight are handled during the exploration stage, and the jobs with low alignment weight and high cohesion weight are handled during the exploitation stage; thereby the algorithm properly balances exploration and exploitation, which increases the rate of convergence and, as a result, reduces the execution time of the dragonfly algorithm.
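The five factors and the step/position vector update can be sketched using the standard dragonfly-algorithm operators (Mirjalili's formulation, which the referenced methods build on); one-dimensional positions and the weight values are illustrative assumptions.

```python
def separation(pos, neighbors):
    """s: move away from the neighbors, S = -sum(P - P_j)."""
    return -sum(pos - p for p in neighbors)

def alignment(neighbor_velocities):
    """al: match the mean velocity V of the N neighboring jobs."""
    return sum(neighbor_velocities) / len(neighbor_velocities)

def cohesion(pos, neighbors):
    """c: urge toward the neighborhood center, C = mean(P_j) - P."""
    return sum(neighbors) / len(neighbors) - pos

def attraction(pos, food_pos):
    """at: interest toward the food source (virtual machine), P+ - P."""
    return food_pos - pos

def distraction(pos, enemy_pos):
    """d: disturbance caused by the enemy, P- + P."""
    return enemy_pos + pos

def dragonfly_step(pos, step, neighbors, velocities, food_pos, enemy_pos,
                   sw=0.1, aw=0.1, cw=0.7, fw=1.0, ew=1.0, inertia=0.9):
    """Update the step vector dS and then the position vector dP."""
    new_step = (sw * separation(pos, neighbors)
                + aw * alignment(velocities)
                + cw * cohesion(pos, neighbors)
                + fw * attraction(pos, food_pos)
                + ew * distraction(pos, enemy_pos)
                + inertia * step)
    return pos + new_step, new_step
```

Raising the alignment weight aw and lowering the cohesion weight cw biases the swarm toward exploration, and the opposite biases it toward exploitation, which is exactly the balancing behavior credited above for the reduced execution time.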

Response time
The RT(j_k, p_k, VM) is the time elapsed between t_I(j_k, p_k, VM) and t_U(j_k, p_k, VM).
The response time of the dragonfly algorithm is dependent on the adaptive tuning capability of the swarm factors s, al, c, at, and d. As more factors need to be handled, the tuning time is high in the initial iterations, but by adaptively changing the weights of the factors the algorithm achieves a smooth transition between exploration and exploitation and arrives at the globally optimal solution; as a result, the response time is found to be average.

Resource utilization
The RU(j_k, p_k, VM) is proportional to the augmented value of RT(j_k, p_k, VM) and the constant Φ.
In the dragonfly algorithm, the radius of the neighborhood is increased with the increase in the number of optimization iterations; this increases the convergence towards promising search spaces and causes divergence away from less promising search spaces, and as a result, the resource utilization is high.

Throughput
The TH(j_k, p_k, VM) is dependent on the cohesion and alignment factors of the dragonfly algorithm.
The dragonfly algorithm automatically changes the attenuation rates of the cohesion and alignment factors to increase the neighborhood area. The dragonflies adjust their flying path to converge to the globally optimal solution and use a Levy flight mechanism to exhaustively search for the optimal solution when there are no neighboring solutions. The rate of successful completion of jobs is high when neighboring solutions exist, but in the absence of neighboring solutions the successful completion of jobs is found to be average.

Raven Roosting LBT
A few methods dealing with load distribution strategies in the cloud environment using the foraging behavior and social roosting of ravens are considered as the reference for performance modeling. The methods considered are the Raven Roosting Optimization Algorithm (RROA) and the Improved Raven Roosting Optimization Algorithm (IRROA) [34][35][36][37][38][39]. The roosting place of ravens acts as an information center to broadcast data related to the food sources in their environment; the birds usually check for the availability of sufficient food in neighboring locations, and if it is found they move towards that location in search of food, else they seek some other location. Likewise, every bird has its own private knowledge about the food sources based on past experience, and this acts as a deciding factor in whether to move towards a food source or to find alternate food sources. These features of the raven roosting algorithm make it strong enough to handle overload/under-load situations in a larger computing domain like the cloud. The social roosting behavior of ravens, following or unfollowing the leader to find a large quantity of food, is mimicked by the jobs to find suitable virtual machines, as shown in Figure 5.

Performance Modeling
The incoming jobs are considered as ravens and the virtual machines in p_k are considered as food sources; at first, the ravens are distributed among the food sources randomly. Then the fitness of every raven location is computed, the personal best location of each raven is updated, and the raven with the best location is treated as the leader. A portion of the ravens is recruited to follow the leader and starts to forage by selecting a random point within the sphere of the leader, while the unrecruited portion of the ravens goes to their personal best locations and starts to forage there. The process of food search is continued by updating the step length of the raven movement until the highest quality food location is found.

Total execution time
The TE(j_k, p_k, VM) is the total time taken to assign the jobs to random virtual machines t_A(j_k, p_k, vm_i), select the leader among the jobs t_L(j_k, p_k, vm_i), compute the step size for the job movement t_SS(j_k, p_k, vm_i), decide the followers and unfollowers of the leader t_F−UF(j_k, p_k, vm_i), and update the personal best of the jobs t_Pbest(j_k, p_k, vm_i).
Here, pos_m is the current position of the job, pos_m−1 is the previous position of the job, and ss_m is the randomly chosen step length of the job.
Here, neighborhood(JL) is the neighborhood of the leader, and pbest(JL) is the personal best of the leader. The raven roosting algorithm follows a simple perception mechanism where the ravens either follow the leader or go with their personal best choice and stochastically stop on finding the best location; this increases the convergence rate of the algorithm and in turn reduces its total execution time.
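One iteration of the follow/unfollow foraging described above can be sketched as follows; the fitness function, sphere radius, and follower fraction are assumptions made for illustration.

```python
import random

def raven_roosting_step(positions, pbest, fitness, follow_frac=0.5, radius=1.0):
    """One iteration: the leader is the raven with the best personal-best
    location (minimization); a portion of the ravens forages within the
    leader's sphere, the unrecruited rest forage at their personal best."""
    leader = min(pbest, key=fitness)
    n_follow = int(len(positions) * follow_frac)
    new_positions = []
    for i, pos in enumerate(positions):
        if i < n_follow:   # recruited: random point within the leader's sphere
            new_pos = leader + random.uniform(-radius, radius)
        else:              # unrecruited: return to personal best and forage there
            new_pos = pbest[i] + random.uniform(-radius, radius)
        new_positions.append(new_pos)
    # update each raven's private memory when the new food location is better
    new_pbest = [new if fitness(new) < fitness(old) else old
                 for new, old in zip(new_positions, pbest)]
    return new_positions, new_pbest
```

Because the personal bests only ever improve, the private memory steadily accumulates the successful foraging locations, which mirrors the convergence argument made above.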

Response time
The RT(j_k, p_k, VM) is the time elapsed between t_A(j_k, p_k, vm_i) and t_Pbest(j_k, p_k, vm_i).
The response time of the raven roosting algorithm is high during the initial iterations, as the ravens are randomly distributed among the food locations and even the step size is chosen randomly to navigate the movement of the ravens; but over the iterations, due to the update of the ravens' memory with the P_best food source locations, the response time gradually reduces.

Resource utilization
The RU(j_k, p_k, VM) is proportional to the augmented value of RT(j_k, p_k, VM) and the constant Φ.
In the raven roosting algorithm, every individual raven maintains a personal memory that remembers the location of the best food source rather than only trusting the location of the raven leader; as a result, the accuracy of the mapping of ravens to appropriate food sources increases, which in turn leads to efficient resource utilization.

Throughput
The TH(j_k, p_k, VM) depends on the successful execution of jobs, which is influenced by t_SS(j_k, p_k, vm_i) and t_F−UF(j_k, p_k, vm_i).
The rate of completion of jobs is lower in the initial iterations as the ravens suffer from neophobia (fear of new food locations); they exhibit some reluctance towards new food locations even if a location has a sufficient amount of resources. However, over the iterations the rate of completion of jobs increases due to the capability of the ravens to memorize successful/unsuccessful food foraging locations.

The behavior of the swarm intelligence-based techniques, i.e., the whale, spider, dragonfly, and raven roosting, with respect to the performance metrics total execution time, response time, resource utilization rate, and throughput is summarized as follows. Among the four load balancing techniques considered, the whale and spider exhibit high similarity, as both approaches adopt a flexible searching strategy: the two techniques either use the bubble-net hunting strategy or vary the vibration intensity of the spider to arrive at proper load balancing strategies with minimum search episodes. The behavior of the dragonfly and raven roosting is unique compared to the other techniques considered for modeling in terms of search strategy optimization. The dragonfly varies the size of the swarms while navigating towards the solution and makes sure that it does not get distracted by enemies; this feature helps in arriving at the globally optimal solution even in the presence of several sub-optimal solutions to the load balancing problem. The raven roosting does both global and local search, as every raven uses its private knowledge and the knowledge of the raven leader while taking decisions. The raven roosting and dragonfly outperform the others, as both utilize the resources efficiently by preventing overloading or under-loading of the resources.

Results and Discussion
For simulation purposes, the CloudAnalyst visual modeling tool based on the CloudSim simulator is used; it consists of two major components, the partition and the job [40]. The partition constitutes a set of virtual machines, memory, disk storage, and bandwidth. The job constitutes the incoming requests per hour, the location of the clients, and the size of the requests. For the experiments, the number of partitions is varied from 10 to 100, the number of jobs per partition is varied from 100000 to 2200000, the number of virtual machines in every partition is varied from 10000 to 30000, and the jobs are classified into five types with fixed size: computing jobs, memory jobs, CPU jobs, transmission jobs, and retrieval jobs. The performance achieved by the whale, spider, dragonfly, and raven techniques is discussed with respect to the performance metrics total execution time, response time, resource utilization rate, and throughput in three different scenarios. In the first scenario, the number of incoming jobs to every partition is fixed while the number of partitions is varied; in the second scenario, one partition is considered while the jobs are varied; and in the third scenario, a variety of job types is directed towards every partition.
A fixed parameter scheme is used to select specific parameter values for the swarm intelligence based load balancing techniques before the simulation is carried out: based on the characteristics of each parameter, a predefined range of values is chosen, the same range is maintained throughout the whole search space, and the parameter values are picked randomly from that range.

Scenario-1
In scenario 1, the number of incoming jobs towards every partition is fixed at 2000000 and the data center partitions are varied from 1 to 100. The partitions are distributed randomly over a large geographical region, and the settings related to the partitions are also fixed. The performance of the whale, spider, dragonfly, and raven techniques of load balancing under scenario 1 is depicted in Figure 6. With respect to total execution time, with the increase in partitions, the total execution time of the dragonfly and raven is lower, the execution time of the spider remains average, whereas the execution time of the whale is found to be higher. With respect to response time, the spider's response time remains higher with the increase in partitions, but the response times of the whale, dragonfly, and raven remain in the average range. With respect to resource utilization rate, the dragonfly, spider, and raven remain above average, whereas there is a considerable drop in the resource utilization rate of the whale. With respect to throughput, the raven achieves an outstanding successful job completion rate, the job completion rate of the dragonfly is above average, whereas the job completion rates of the whale and spider remain in the average range.

Scenario-2
In scenario 2, one partition is considered and the incoming jobs are varied from 100000 to 2200000. The performance of the whale, spider, dragonfly, and raven techniques of load balancing under scenario 2 is depicted in Figure 7. With respect to total execution time, with the increase in jobs, the total execution time of the whale is found to be high, whereas the execution times of the spider, dragonfly, and raven remain average. With respect to response time, the response times of the whale and spider are found to be higher, the response time of the raven is in the average range, whereas the response time of the dragonfly is lower, especially when more jobs are considered. With respect to resource utilization rate, the resource utilization rates of the raven and the spider are found to be higher than those of the whale and dragonfly. With respect to throughput, the job completion rates of the dragonfly and raven remain consistently higher with the increase in jobs, but an uneven pattern in the job completion rates of the spider and the whale is observed.

Scenario-3
In scenario 3, varieties of incoming jobs (computing jobs, memory jobs, CPU jobs, transmission jobs, and retrieval jobs) in every partition are considered. The performance of the whale, spider, dragonfly, and raven techniques of load balancing under scenario 3 is depicted in Figure 8. With respect to computing jobs, the total execution times of the dragonfly and spider are lower whereas those of the whale and raven are higher; the response time of the whale is found to be lower, the response times of the raven and dragonfly are average, but the response time of the spider is found to be higher; the resource utilization rates of the whale, spider, and raven are higher compared to the dragonfly; and the rate of successful completion of jobs is found to be average for the whale, spider, dragonfly, and raven. With respect to memory jobs, the total execution times of the whale, spider, and dragonfly are lower compared to the raven; the response times of the whale and dragonfly are lower compared to the spider and raven; the resource utilization rate of the dragonfly is lower compared to the whale, spider, and raven; and the rate of completion of jobs is higher for the whale, dragonfly, and raven compared to the spider. With respect to CPU jobs, the total execution times of all the algorithms, i.e., the whale, dragonfly, raven, and spider, are found to be average; the response times of the whale and dragonfly are lower compared to the spider and raven; the resource utilization rates of all the algorithms are found to be average; and the rate of successful completion of jobs is found to be higher for the whale, dragonfly, and raven compared to the spider.
With respect to transmission jobs, the total execution time of the dragonfly is lower compared to the whale, spider, and raven; the response times of all the algorithms, i.e., the raven, dragonfly, spider, and whale, are found to be below average; the resource utilization rates of all the algorithms are found to be high; and the rate of successful completion of jobs is higher for the raven compared to the whale, spider, and dragonfly. With respect to retrieval jobs, the total execution times of the dragonfly and spider are lower compared to the whale and raven; the response times of all the algorithms, i.e., the whale, spider, dragonfly, and raven, fall in the average range; the resource utilization rates of the spider and whale are higher compared to the dragonfly and raven; and the rate of successful completion of jobs is higher for the raven compared to the whale, spider, and dragonfly.