Learnheuristics: hybridizing metaheuristics with machine learning for optimization with dynamic inputs

Laura Calvet 1, Jésica de Armas 1, David Masip 1 and Angel A. Juan 1
  • 1 Dept. of Computer Science – IN3, Castelldefels, Spain

Abstract

This paper reviews the existing literature on the combination of metaheuristics with machine learning methods and then introduces the concept of learnheuristics, a novel type of hybrid algorithm. Learnheuristics can be used to solve combinatorial optimization problems with dynamic inputs (COPDIs). In these COPDIs, the problem inputs (elements appearing either in the objective function or in the constraint set) are not fixed in advance, as is usually assumed. On the contrary, they may vary in a predictable (non-random) way as the solution is partially built by some heuristic-based iterative process. For instance, a consumer's willingness to spend on a specific product might change as the availability of this product decreases and its price rises. Thus, these inputs may take different values depending on the current solution configuration. Such variations in the inputs require coordination between the learning mechanism and the metaheuristic algorithm: at each iteration, the learning method updates the inputs model used by the metaheuristic.
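To make this coordination loop concrete, the following minimal Python sketch illustrates the idea under simplifying assumptions (all names, such as DynamicInputModel and learnheuristic_search, are hypothetical and do not correspond to the authors' implementation): a greedy-randomized heuristic serves customers from a limited stock, while an embedded regression model re-estimates the dynamic input, the willingness to pay, as scarcity grows. At every construction step the heuristic consumes the model's current prediction, and the model is refreshed with the observed value.

```python
# Illustrative learnheuristic loop (hypothetical names, not the authors' code).
# A learning model estimates a dynamic input (willingness to pay, which decays
# as stock becomes scarce) while a greedy-randomized heuristic builds solutions
# that consume those estimates.
import random


class DynamicInputModel:
    """Learning component: predicts willingness to pay from current scarcity."""

    def __init__(self, base_price):
        self.base_price = base_price
        self.slope = 0.0        # learned sensitivity of willingness to scarcity
        self.observations = []  # pairs (scarcity, deviation from base price)

    def update(self, scarcity, observed_willingness):
        # Least-squares fit through the origin; a stand-in for any statistical
        # or machine learning model the learnheuristic could embed.
        self.observations.append((scarcity, observed_willingness - self.base_price))
        num = sum(x * y for x, y in self.observations)
        den = sum(x * x for x, _ in self.observations) or 1.0
        self.slope = num / den

    def predict(self, scarcity):
        return self.base_price + self.slope * scarcity


def learnheuristic_search(customers, stock, unit_cost, model, iterations=200, seed=0):
    """Metaheuristic component: multi-start greedy-randomized construction."""
    rng = random.Random(seed)
    best_solution, best_profit = None, float("-inf")
    for _ in range(iterations):
        remaining, solution, profit = stock, [], 0.0
        for customer in rng.sample(customers, len(customers)):
            scarcity = 1.0 - remaining / stock
            # The heuristic decides using the *predicted* dynamic input ...
            predicted_wtp = model.predict(scarcity)
            if remaining > 0 and predicted_wtp > unit_cost:
                solution.append(customer["id"])
                profit += predicted_wtp - unit_cost
                remaining -= 1
            # ... and the learning method is then updated with the value
            # actually observed, so the inputs model changes as the solution
            # is being built.
            model.update(scarcity, customer["true_wtp"](scarcity))
        if profit > best_profit:
            best_solution, best_profit = solution, profit
    return best_solution, best_profit


if __name__ == "__main__":
    # Toy instance: willingness to pay drops by 4 monetary units per unit of scarcity.
    customers = [{"id": i, "true_wtp": lambda s: 12.0 - 4.0 * s} for i in range(6)]
    model = DynamicInputModel(base_price=12.0)
    print(learnheuristic_search(customers, stock=4, unit_cost=5.0, model=model))
```

The key design choice in the sketch is that the learning model is queried and updated inside the construction loop, not only between runs, which is what distinguishes the learnheuristic coordination described above from an offline pre-processing step.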
