Test suites are indispensable for developing new global optimization algorithms, as well as for studying existing optimization methods. Tests for optimization are available in many forms. Several previous studies contain collections of test problems. To the best of our knowledge, the most comprehensive test suite for bound-constrained global optimization is considered in . Some test suites are available online as modules for various programming languages. It is worth mentioning CUTEr , a versatile testing environment for optimization and linear algebra solvers. The package contains a collection of test problems, along with Fortran 77, Fortran 90/95 and Matlab tools intended to help developers design, compare and improve new and existing solvers. Many Internet sources provide collections of global optimization benchmarks, for example . In addition, there are automated generators of test functions .
There are also several techniques for comparing global optimization algorithms. For example,  introduces a methodology that allows one to compare stochastic and deterministic methods. The article  is devoted to a comparison between nature-inspired metaheuristic and deterministic algorithms. A systematic review of the benchmarking process for optimization algorithms is given in .
It is important to note that in , the functions were collected from various literary sources. Our careful examination showed that, in the process of rewriting, errors were made in more than 30% of the tests. Thus, verifying the test suite is very important. For this purpose we used a deterministic global optimization approach.
In addition to calculating the value of an objective function, global optimization methods use various ways of estimating a function on a given box. For example, an enclosing interval of a function can be computed by interval analysis  or by using Lipschitzian properties. Manual programming of these methods is time-consuming and error-prone. We automated these tasks: the interval bounds are computed from the same internal representation as the function value.
This article describes an approach to creating benchmark functions that calculate the value of the objective function at a given point and also automatically obtain interval bounds of the function on a given box, using a single description of the mathematical expression. As a result of this approach, a test suite of 150 C++ template functions was created. Practically all the functions for the test suite were taken from , checked against the original sources and automatically verified with an interval global optimization method. In total, four types of unit tests were developed based on the Google C++ Testing Framework.
The distinguishing feature of our approach is that we verified the test suite using global optimization methods and that we provide a flexible C++ interface to the benchmarks. This interface supports standard methods for computing objective values and gradients, as well as interval estimates, which are more complex but highly demanded in global optimization.
2 Description of the benchmark functions
Let us exemplify our approach with the Rosenbrock (1) and DropWave (2) functions.
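For reference, the standard forms of these two benchmarks, as commonly given in the literature, are:

```latex
f_{\text{Ros}}(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^{2}\right)^{2}+\left(1-x_i\right)^{2}\right] \qquad (1)
```

```latex
f_{\text{DW}}(x_1,x_2) = -\,\frac{1+\cos\left(12\sqrt{x_1^{2}+x_2^{2}}\right)}{0.5\left(x_1^{2}+x_2^{2}\right)+2} \qquad (2)
```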
The function (1) can have an arbitrary number of variables, while the second one has exactly two parameters. The C++ template function for the Rosenbrock function is depicted in Figure 1.
Let us look at this section of code in detail. The input parameter n is the number of function variables of type T. The template parameter T is either a standard C++ floating-point type (double or float) or the Interval<> type for working with interval estimates (Section 4). The function returns an object of type Expr<T>.
The implementation of the function body is based on the mathematical expressions library outlined in Section 3. The variable of type Expr<T> shown in Figure 1 holds the expression being constructed.
The iterator constructor has two input parameters specifying the range of the index variable. Note that the index of a variable does not necessarily have an Iterator type: it can also be any integral expression. An example is given in Figure 1: initially, the variable is initialized by the iterator, and then an expression is used as an index to access vector elements. The loopSum method returns the expression implementing the summation over the iterator i. Another method, sqr, returns a mathematical expression of type Expr<T> representing the square of its argument.
The Rosenbrock function depicted in Figure 1 returns an object representing a mathematical expression to the calling code. Note that the function description is specified only once. The resulting expression makes it possible to calculate the value of the function or its gradient at given points, or to compute an interval estimate of the function or gradient over a box.
Besides a function, a benchmark contains constraints. In this particular case we consider only interval (box) constraints. The remaining information about the benchmark is stored as a meta description (Figure 2).
In the meta description the following JSON fields are used:
- description – name of the global optimization benchmark;
- anyDim – a true or false flag describing the type of dimensionality. If the flag is set to true, the size of the parameter vector can be arbitrary and must be set by a user. If the flag is false, the size of the space is specified by the dim field (Figure 4);
- dim – the number of parameters of an objective function. This field should be empty if the flag anyDim is set to true;
- bounds – an array that describes the interval bounds on the parameters. Each element of this array stores the left and right boundaries for a function parameter, denoted by the a and b fields respectively. If the flag anyDim is true, the array has only one element (Figure 2); all other elements are assumed to be equal to the first;
- globMinX – an array storing the coordinates of the global minimum points (Figure 4). The array is necessary because the function may have multiple global minima. Each element consists of an array x that stores the coordinates of a global minimum point. If the flag anyDim is set to true, this array has only one element (Figure 2); all other coordinates are assumed to be equal to the first;
- globMinY – the global minimum value of the function;
- comment – an optional field with any additional information about the function.
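Putting the fields together, a meta description for the Rosenbrock benchmark could look as follows (the bound values here are illustrative assumptions, not the actual file contents):

```json
{
  "description": "Rosenbrock function",
  "anyDim": true,
  "dim": null,
  "bounds": [ { "a": -30.0, "b": 30.0 } ],
  "globMinX": [ { "x": [ 1.0 ] } ],
  "globMinY": 0.0,
  "comment": "Banana-shaped valley; the minimum is attained at x_i = 1"
}
```

Since anyDim is true, the single bounds element and the single coordinate are replicated for all dimensions.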
Figure 3 shows the description of the DropWave function (2). This function has a strictly defined dimension (equal to 2) and does not have an input parameter, unlike the Rosenbrock function (1).
It should be noted that if the formula of a function is large, it is more convenient to break it into several parts. For example, the DropWave function (Figure 3) has intermediate variables a, b and c, which are initialized with parts of the formula. This reduces the number of parentheses and improves readability.
3 The mathematical expression library
The mathematical expression library was developed in the C++ programming language. Though C++ is not as common in scientific computing as, for instance, Python and Fortran, we believe it is still important to support it, because a significant fraction of researchers and practitioners use this language in their work. C++ is significantly faster than Python, and it supports templates and other advanced object-oriented capabilities. Such capabilities are crucial for processing polymorphic expressions and for building extensible tools for computing function values and bounds. Another reason why we use C++ is the possibility of integration with existing code and libraries in our department.
The library of mathematical expressions is built around the template class Expr<T>, which represents a mathematical expression over values of type T. The class provides several constructors for building elementary expressions such as constants and variables.
Standard mathematical operations such as addition, subtraction and multiplication are implemented using C++ operator overloading. To describe various mathematical expressions, the library includes elementary mathematical functions: the trigonometric functions sin, cos, tg and ctg, as well as the inverse trigonometric functions acos, asin, atg and actg. In addition, the exponential function exp, the logarithmic functions ln and log, the power function pow, the absolute value abs, the minimum function min and the maximum function max are supported. The IfThen ternary operation is implemented for organizing conditional logic in mathematical expressions. The functions LoopSum and LoopMul implement summation and multiplication over an index range, respectively. A stream output operation is provided to print a mathematical expression of type Expr<T>.
The calc method (Figure 7) is implemented in the Expr<T> class; it evaluates an expression using a supplied computation algorithm. The idea of the algorithms in our library follows the concept of the strategy design pattern , where the same data (in our case, a mathematical expression) can be processed by different algorithms. Currently, four types of algorithms are implemented: FuncAlg for computing function values, InterEvalAlg for interval estimates, ValDerAlg for computing values together with gradients, and an algorithm for interval estimates of gradients.
The library of mathematical expressions makes it possible to calculate an interval estimate of a function on a given box. The calc method described above is used for this purpose with the InterEvalAlg algorithm.
To calculate the gradient of a function, it is first necessary to create the mathematical expression by passing the ValDer type as the template parameter.
The calculation of the interval estimate of the function gradient is shown in Figure 11. First, it is necessary to create the mathematical expression by passing the IntervalDer type as the template parameter.
4 The Interval arithmetic library
As described in Section 3, the library of mathematical expressions can calculate interval estimates of a function. To achieve this, the algorithm implemented by the class InterEvalAlg relies on an interval arithmetic library. Currently, we have implemented an extended version of the Interval<> class providing the arithmetic operations and elementary functions needed by the expression library.
5 The test set of functions and the test environment
The test suite contains 150 well-known global optimization functions borrowed from . The entire collection is implemented in the form of C++ template functions discussed above (Figures 1 and 3). All necessary definitions are located in the testfuncs.h file, which can easily be added to any C++ application. The DescFuncReader class implements reading the metadata from a JSON file whose format was considered earlier (Figures 2 and 4). The class has a getdesr method that returns a descfunc structure with the benchmark metadata retrieved by a function key. A list of all the keys is located in the Keys structure in the keys.hpp file. An example of using this class is shown below when describing the testing environment.
The entire test set has been thoroughly tested and verified. We developed four types of unit tests:
- a test for the equality of the calculated and expected value of a function;
- a test checking that the value of a function belongs to its interval estimate;
- a test for the equality of the found and expected global minimum of a function;
- a test for the equality of the calculated and expected value of the function gradient.
We used the Google testing framework (gtest)  to develop the unit tests. About 600 tests of different types were implemented. All tests share a dedicated common part and a short call of this common part from the body of a test function. Below we describe these tests in detail.
Figure 14 shows an example of testing the value of the Rosenbrock function. The goal of the test is to check the equality of the calculated and expected values at the global minimum point of the function. The Rosenbrock function is called in the body of the test function TestRosenbrock to create the mathematical expression for this function. Next, the Test method of the FuncsTest class is called, which is common for all unit tests. The parameters of this method are the key (a unique name of a benchmark), a mathematical expression and, optionally, the number of variables. The latter is specified if the anyDim flag is true (see Figure 2).
The DescFuncReader class object is created in the constructor of the FuncsTest class. This object is required to read the function metadata; the path to the JSON file is passed to its constructor. The body of the Test function is given in Figure 14. First, the getdesr method of the DescFuncReader class is called to get the metadata for the Rosenbrock benchmark. Next, we get the first point of the global minimum, globMinX. Then the function value is calculated at that point. After that, we get the expected global minimum value globMinY. At the end of the Test function, the ASSERT_NEAR macro of the gtest environment is called to compare the calculated value with the expected value globMinY. These values should not differ by more than the maximum permissible difference EPSILON, equal to 0.001.
Figure 15 shows an example of checking that a particular value belongs to the enclosing interval. The test is organized as follows:
1. The value of the function is calculated at a random point in a box.
2. A new box with a given edge length is created around the generated point.
3. The interval estimate of the function is calculated for the box obtained at step 2.
4. An assertion checks that the function value obtained at step 1 belongs to the interval estimate computed at step 3.
The next type of test compares the global minimum value obtained with the help of the non-uniform covering method  and the global minimum value documented in the literature . The entire set of 150 functions was checked with this type of test. We used interval lower bounds for the non-uniform covering method. The assertion checks whether the global minimum found differs from the expected value by no more than the specified accuracy EPSILON.
The test comparing the computed and expected values of the gradient is based on calculating the derivatives of a function in two ways: by the approximate finite-difference method and by an exact method based on automatic differentiation . The test compares the results of calculating the gradient of the function at an arbitrary point of a given box. It works as follows:
1. Calculate the value of the function and its gradient at a random point of a given box using the ValDerAlg algorithm.
2. Calculate the value of the function at the same random point using the FuncAlg algorithm.
3. The values of the function obtained by the ValDerAlg and FuncAlg algorithms should not differ by more than EPSILON.
4. For each coordinate of the random point, obtain a new point by adding DELTA to the current coordinate. Calculate the value of the function at this point using the FuncAlg algorithm.
5. Calculate the partial derivative by dividing by DELTA the difference between the function values at the new point and the random point.
6. Compare the value of the partial derivative obtained at step 1 with that obtained at step 5. The comparison is performed in relative terms.
7. Return to step 4 until all coordinates are checked.
Figure 16 shows an example of testing the gradient of the Rosenbrock function.
6 Conclusion
In this paper, we described the implementation and verification of tests for bound-constrained global optimization. A test suite of 150 functions was developed with the help of this approach. The suite was verified by a basic global optimization solver.
As a further development of the libraries described above, we plan to support the automatic calculation of second derivatives of functions, as well as their interval estimates. Techniques of fast automatic differentiation  will be used to achieve these goals. This functionality will allow the evaluation of alternative function estimates, for example, those based on the Lipschitz constant . It is also planned to extend our approach to multi-objective problems so that existing methods of deterministic global multi-criteria optimization  can be employed.
The test suite can be used to compare various methods of global or local optimization. The C++ source code of all the libraries, as well as the test set of functions, can be downloaded from GitHub at https://github.com/alusov/mathexplib.git.
This study was supported by the Ministry of Science and Education of the Republic of Kazakhstan (project 0115PK00554), the Russian Foundation for Basic Research (project 17-07-00510 A), the Leading Scientific Schools project NSH-8860.2016.1, and Project III of the Division of Mathematics of RAS.
Jamil, M., & Yang, X. S. (2013). A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2), 150-194.
Global Optimization Benchmarks and Adaptive Memory Programming for Global Optimization. Web: http://infinity77.net/global_optimization/genindex.html
Gaviano, M., Kvasov, D. E., Lera, D., & Sergeyev, Y. D. (2003). Algorithm 829: Software for generation of classes of test functions with known local and global minima for global optimization. ACM Transactions on Mathematical Software (TOMS), 29(4), 469-480.
Sergeyev, Y. D., Kvasov, D. E., & Mukhametzhanov, M. S. (2017). Operational zones for comparing metaheuristic and deterministic one-dimensional global optimization algorithms. Mathematics and Computers in Simulation, 141, 96-109.
Kvasov, D. E., & Mukhametzhanov, M. S. (2017). Metaheuristic vs. deterministic global optimization algorithms: The univariate case. Applied Mathematics and Computation. 318, 245-259.
Beiranvand, V., Hare, W., & Lucet, Y. (2017). Best practices for comparing optimization algorithms. Optimization and Engineering, 18(4), 815-848.
Hansen, Eldon, and G. William Walster, eds. Global optimization using interval analysis: revised and expanded. Vol. 264. CRC Press, 2003.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
Interval Arithmetic Library. Boost C++ libraries. Web: http://www.boost.org/doc/libs/1_65_1/libs/numeric/interval/doc/interval.htm
Evtushenko, Y. G. (1971). Numerical methods for finding global extrema (case of a non-uniform mesh). USSR Computational Mathematics and Mathematical Physics, 11(6), 38-54.
Kearfott, R. B. (1996). Rigorous Global Search: Continuous Problems. Nonconvex Optimization and Its Applications. Kluwer Academic Publishers.
Evtushenko, Y. G., & Zubov, V. I. (2016). Generalized fast automatic differentiation technique. Computational Mathematics and Mathematical Physics, 56(11), 1819-1833.
R.G. Strongin and Y.D. Sergeyev, Global Optimization with Non-convex Constraints: Sequential and Parallel Algorithms, Springer Science & Business Media, New York, 2013.
Evtushenko, Y. G., & Posypkin, M. A. (2014). A deterministic algorithm for global multi-objective optimization. Optimization Methods and Software, 29(5), 1005-1019.