# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Vespri, Vincenzo / Marano, Salvatore Angelo

IMPACT FACTOR 2018: 0.726
5-year IMPACT FACTOR: 0.869

CiteScore 2018: 0.90

SCImago Journal Rank (SJR) 2018: 0.323
Source Normalized Impact per Paper (SNIP) 2018: 0.821

Mathematical Citation Quotient (MCQ) 2018: 0.34

ICV 2018: 152.31

Open Access | Online ISSN: 2391-5455
Volume 15, Issue 1

# A new branch and bound algorithm for minimax ratios problems

Yingfeng Zhao (corresponding author)
• School of Mathematics and Statistics, Xidian University, Xi’an 710071, China
• School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China

Sanyang Liu / Hongwei Jiao
• School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China

Published Online: 2017-06-22 | DOI: https://doi.org/10.1515/math-2017-0072

## Abstract

This study presents an efficient branch and bound algorithm for globally solving the minimax fractional programming problem (MFP). By introducing an auxiliary variable, an equivalent problem is first constructed, and a convex relaxation programming problem is then established by exploiting the convexity and concavity of the functions in the problem. Unlike the usual branch and bound algorithm, an adapted partition rule and a practical reduction technique, both performed only on a unidimensional interval, are incorporated into the algorithm scheme to significantly improve computational performance. Global convergence is proved. Finally, comparative experiments and a randomized numerical test are carried out to demonstrate the efficiency and robustness of the proposed algorithm.

MSC 2010: 90C26; 90C30

## 1 Introduction

The minimax fractional programming (MFP) problem studied in this paper can be formulated as the following nonlinear optimization problem:
$$(\mathrm{MFP}):\quad \begin{aligned}\min\ &\max\left\{\frac{n_1(x)}{d_1(x)},\ \frac{n_2(x)}{d_2(x)},\ \dots,\ \frac{n_p(x)}{d_p(x)}\right\}\\ \text{s.t.}\ & g_j(x)\le 0,\ j=1,2,\dots,m,\\ & x\in X^0=\{x\in\mathbb{R}^n \mid l_i^0\le x_i\le u_i^0,\ i=1,2,\cdots,n\},\end{aligned}$$

where $n_i(x)$, $-d_i(x)$, $i=1,2,\cdots,p$, and $g_j(x)$, $j=1,2,\cdots,m$, are all convex functions, and we assume that each numerator and denominator in the objective function satisfies the following condition:
$$n_i(x)\ge 0,\quad d_i(x)>0,\quad \forall x\in X^0,\ i=1,2,\cdots,p.\tag{1}$$
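The objective of (MFP) is simply the pointwise maximum of the $p$ ratios. A minimal sketch of how it is evaluated, using two hypothetical ratios (not an instance from the paper):

```python
# Evaluate the minimax ratio objective f(x) = max_i n_i(x)/d_i(x) for an
# illustrative two-ratio instance on a 2-D box; the ratios are hypothetical.

def minimax_objective(x, numerators, denominators):
    """Return max_i n_i(x)/d_i(x); condition (1) requires d_i(x) > 0 on the box."""
    values = []
    for n, d in zip(numerators, denominators):
        den = d(x)
        assert den > 0, "condition (1) requires d_i(x) > 0"
        values.append(n(x) / den)
    return max(values)

# Hypothetical convex numerators / concave denominators (here all affine)
nums = [lambda x: 3 * x[0] + x[1] + 1, lambda x: x[0] + 2 * x[1]]
dens = [lambda x: x[0] + x[1] + 1, lambda x: 2 * x[0] + x[1] + 2]

print(minimax_objective([1.0, 1.0], nums, dens))  # max of 5/3 and 3/5
```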

The MFP problem, which we call the generalized minimax convex-concave ratios problem, is one of the most active and useful areas of fractional programming, and it has attracted the interest of both practitioners and researchers for many years [1-4]. The reason for studying the MFP problem is twofold. First, from the practical point of view, the MFP problem has many applications in areas where several rates are to be optimized simultaneously, such as the design of electronic circuits [5], DNA melting temperature calculation in biology [6], rural farming systems [7], neural networks [8], management and finance [9], and system identification [10-12], to name but a few. Second, from the research point of view, the MFP problem generalizes the minimax linear ratios problem [13-15], and it is neither quasiconcave nor quasiconvex, so it may have multiple local optima, most of which fail to be globally optimal (Example 8 and Figure 1); it therefore poses significant theoretical and computational challenges. For these reasons, developing new solution methods for the MFP problem remains necessary and meaningful.

Figure 1

3-D image of the objective function over $[0,2]\times[0,2]$ in Example 7.

Over the years, several global optimization methods have become available for solving special cases of the (MFP), including the parametric programming method [16], an interior-point algorithm [17], a monotonic optimization approach [18], and Dinkelbach-type algorithms [19, 20]. When the numerators and denominators in the objective function of the (MFP) are all affine, Feng et al. presented two deterministic algorithms for solving this special case [14, 15]. Recently, using a two-step relaxation technique, Jiao and Liu proposed a new global optimization algorithm for solving the MFP with linear ratios and constraints [13]. Jeyakumar et al. established strong duality for robust minimax fractional programming problems in which the data in both the objective and the constraints are uncertain [21]. Lai and Huang established sufficient optimality conditions for a minimax programming problem involving p fractional n-set functions under generalized convexity [22]. Progress on the MFP problem extends well beyond the works mentioned above; there are many other studies of the (MFP) and its special cases. Despite these various contributions, most recent work on the MFP focuses only on optimality conditions or duality theory for special cases, and the existing algorithms can only find local optimal solutions or solve the linear form of the MFP. For the general (MFP) investigated in this paper, to our knowledge, very few global optimization algorithms have been developed.

In this paper, we put forward a unidimensional outcome-interval branch and bound algorithm for globally solving the (MFP). First, a concise convex relaxation programming problem for the equivalent problem (EMFP) is established. Then the key operations of the outcome-space branch and bound algorithm are described; in particular, the interval reduction technique can significantly improve the computational efficiency of the presented algorithm. Third, the proposed algorithm is developed by incorporating the accelerating technique as well as the adapted partition rule into the branch and bound scheme. The global convergence property is proved, and numerical results show that the proposed algorithm has higher computational efficiency and stronger robustness than the methods of Refs. [13] and [14, 15].

The remainder of this study is organized as follows. In Section 2, by introducing an auxiliary variable, the MFP problem is converted into an equivalent problem (EMFP). The convex relaxation programming problem for the (EMFP) is established in Section 3; the linear relaxation programming for the special case in which the numerators, denominators, and constraints are all linear is also described there. Section 4 presents the key operations of the branch and bound algorithm and states the unidimensional outcome-interval branch and bound algorithm for globally solving the (MFP), together with its convergence analysis. Numerical results on test examples from recent works are reported in Section 5, and concluding remarks are given in the last section.

## 2 Equivalent problem

To solve the problem, we first transform the problem (MFP) into an equivalent problem (EMFP) by associating all ratios in the objective function with an auxiliary variable. To this end, we introduce the following notation:
$$\underline{n}_i\le n_i(x)\le \overline{n}_i,\quad \underline{d}_i\le d_i(x)\le \overline{d}_i,\quad l_{n+1}^0=\min_{1\le i\le p}\left\{\frac{\underline{n}_i}{\overline{d}_i}\right\},\quad u_{n+1}^0=\max_{1\le i\le p}\left\{\frac{\overline{n}_i}{\underline{d}_i}\right\};$$

where the bounds $\underline{n}_i$, $\overline{n}_i$, $\underline{d}_i$, $\overline{d}_i$ can be easily computed by utilizing the convexity and concavity of $n_i(x)$ and $d_i(x)$, respectively. By introducing a positive auxiliary variable $x_{n+1}$, we can derive the following equivalent problem (EMFP0) of the MFP problem:
$$(\mathrm{EMFP}_0):\quad \begin{aligned}\min\ & x_{n+1}\\ \text{s.t.}\ & n_i(x)-x_{n+1}d_i(x)\le 0,\ i=1,2,\cdots,p,\\ & g_j(x)\le 0,\ j=1,2,\dots,m,\\ & x\in X^0=\{x\in\mathbb{R}^n \mid l_i^0\le x_i\le u_i^0,\ i=1,2,\cdots,n\},\quad x_{n+1}\in D^0=[l_{n+1}^0,u_{n+1}^0].\end{aligned}$$

By an equivalent rearrangement of the first set of constraints and a change of notation, one can reformulate the (EMFP0) into the following form:
$$(\mathrm{EMFP}):\quad \begin{aligned}\min\ & x_{n+1}\\ \text{s.t.}\ & n_i(x)-x_{n+1}(d_i(x)-\underline{d}_i)-\underline{d}_i x_{n+1}\le 0,\ i=1,2,\cdots,p,\\ & g_j(x)\le 0,\ j=1,2,\dots,m,\\ & x\in X^0,\quad x_{n+1}\in D^0.\end{aligned}$$

Note that the reason we convert the (EMFP0) into the form (EMFP) is that, by isolating a linear part from the concave function di(x), we can make the convex relaxation programming problem closer to the initial problem (MFP); this operation significantly improves the efficiency of the algorithm, as confirmed by the numerical experiments. Moreover, in the EMFP problem, only the middle term in the first set of constraints is nonconvex. This brings great convenience in building the convex relaxation of the initial problem. The equivalence between the problems (MFP) and (EMFP) is given by the following theorem.

#### Theorem 2.1

If $\left({x}^{\ast },{x}_{n+1}^{\ast }\right)\in {R}^{n+1}$ is a global optimal solution for the (EMFP), then x*Rn is a global optimal solution for the (MFP). Conversely, if x*Rn is a global optimal solution for the (MFP), then $\left({x}^{\ast },{x}_{n+1}^{\ast }\right)\in {R}^{n+1}$ is a global optimal solution for the (EMFP), where ${x}_{n+1}^{\ast }=max\left\{\frac{{n}_{1}\left({x}^{\ast }\right)}{{d}_{1}\left({x}^{\ast }\right)},\phantom{\rule{thinmathspace}{0ex}}\frac{{n}_{2}\left({x}^{\ast }\right)}{{d}_{2}\left({x}^{\ast }\right)},\dots ,\frac{{n}_{p}\left({x}^{\ast }\right)}{{d}_{p}\left({x}^{\ast }\right)}\right\}.$

#### Proof

The proof of this theorem can be easily followed according to the definition of the problems (EMFP0) and (EMFP), therefore, it is omitted here.□

To solve the MFP problem, based on Theorem 2.1, we only need to solve the EMFP problem instead. Hence from now on, we will assume that the initial problem (MFP) has been of the form like (EMFP), and then our main work will focus on how to globally solve the (EMFP).
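The equivalence in Theorem 2.1 is the standard epigraph argument: for any feasible x, setting the auxiliary variable to f(x) satisfies every constraint $n_i(x)-x_{n+1}d_i(x)\le 0$, while any smaller value violates the constraint of a maximizing ratio. A small numeric illustration (the two ratios are hypothetical, not from the paper):

```python
# Epigraph check: with t = max_i n_i(x)/d_i(x), every constraint
# n_i(x) - t*d_i(x) <= 0 holds; a smaller t violates some constraint.

def emfp_feasible(x, t, numerators, denominators, tol=1e-12):
    """True if (x, t) satisfies n_i(x) - t*d_i(x) <= 0 for all i."""
    return all(n(x) - t * d(x) <= tol for n, d in zip(numerators, denominators))

# Hypothetical ratios with positive denominators on the box
nums = [lambda x: 3 * x[0] + x[1] + 1, lambda x: x[0] + 2 * x[1]]
dens = [lambda x: x[0] + x[1] + 1, lambda x: 2 * x[0] + x[1] + 2]

x = [1.0, 1.0]
t = max(n(x) / d(x) for n, d in zip(nums, dens))  # t = f(x)
print(emfp_feasible(x, t, nums, dens))            # the epigraph value is feasible
print(emfp_feasible(x, t - 0.1, nums, dens))      # a smaller t is infeasible
```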

## 3 Convex relaxation programming problem

In this section, we construct a convex relaxation programming problem for the (EMFP) in which only the auxiliary variable is relaxed over an outcome interval D. For ease of presentation, we assume that D = [ln+1, un+1] represents either the initial outcome interval D0 of the (EMFP) or a modified interval generated by the branching operation in the algorithm. Next, we describe the construction of the convex relaxation programming problem (RMFP) of the (EMFP) corresponding to D.

For this, we only need to consider the nonconvex constraints in the (EMFP). For convenience, we denote these constraint functions by
$$h_i(x,x_{n+1})=n_i(x)-x_{n+1}(d_i(x)-\underline{d}_i)-\underline{d}_i x_{n+1},\quad i=1,2,\cdots,p.\tag{2}$$

Assume $\overline{v}$ and $\underline{v}$ denote the best known upper and lower bounds of the objective value (if no such values have been found, set $\overline{v}:=+\infty$, $\underline{v}:=-\infty$), and let $M=\min\{\overline{v},u_{n+1}\}$; then we can obtain an underestimating function of $h_i(x,x_{n+1})$, formulated as
$$\tilde{h}_i(x,x_{n+1})=n_i(x)-M(d_i(x)-\underline{d}_i)-\underline{d}_i x_{n+1}=n_i(x)-Md_i(x)+M\underline{d}_i-\underline{d}_i x_{n+1}.\tag{3}$$

Based on the definition of M and the convexity and concavity of ni(x) and di(x), it is not hard to see that $\tilde{h}_i(x,x_{n+1})$ is convex and provides a good lower approximation of $h_i(x,x_{n+1})$, i = 1, 2, ⋯, p. Thus, for an arbitrary region $X^0\times D\subseteq X^0\times D^0$, we can state the convex relaxation programming problem (RMFP) corresponding to the outcome interval D as follows:
$$(\mathrm{RMFP}):\quad \begin{aligned}\min\ & x_{n+1}\\ \text{s.t.}\ & \tilde{h}_i(x,x_{n+1})\le 0,\ i=1,2,\cdots,p,\\ & g_j(x)\le 0,\ j=1,2,\dots,m,\\ & x\in X^0,\quad x_{n+1}\in D.\end{aligned}$$
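The underestimation in (3) works because $h-\tilde{h}=(x_{n+1}-M)(d_i(x)-\underline{d}_i)\le 0$ whenever $x_{n+1}\le M$ and $d_i(x)\ge \underline{d}_i$. A numeric sanity check on an illustrative one-dimensional instance (the functions and bounds are assumptions, not from the paper):

```python
# Check numerically that h_tilde(x, t) <= h(x, t) for sample points, where
# h(x, t)       = n(x) - t*(d(x) - d_lo) - d_lo*t        (constraint (2))
# h_tilde(x, t) = n(x) - M*(d(x) - d_lo) - d_lo*t        (relaxation (3))
# and M >= t, d(x) >= d_lo hold on the sampled region.

def h(n, d, d_lo, x, t):
    return n(x) - t * (d(x) - d_lo) - d_lo * t

def h_tilde(n, d, d_lo, x, t, M):
    return n(x) - M * (d(x) - d_lo) - d_lo * t

n = lambda x: x[0] ** 2 + 1.0    # convex numerator (illustrative)
d = lambda x: 4.0 - x[0] ** 2    # concave denominator (illustrative)
d_lo = 3.0                       # lower bound of d over the box [0, 1]
M = 2.0                          # M = min{upper bound, u_{n+1}}, with M >= t

for t in (0.5, 1.0, 2.0):
    for x in ([0.0], [0.5], [1.0]):
        assert h_tilde(n, d, d_lo, x, t, M) <= h(n, d, d_lo, x, t) + 1e-12
print("h_tilde underestimates h on all samples")
```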

#### Remark 3.1

The relaxation operation we use is very simple and easy to implement: we only need to compare two real numbers in each iteration to construct the relaxation problem. This is quite different from usual branch and bound algorithms, most of which utilize concave envelopes or linearization techniques to construct relaxations, and it greatly reduces the computation time needed to establish underestimating functions for the nonconvex functions in the initial problem. Furthermore, since the relaxation operation occurs only in the unidimensional outcome interval, we need only subdivide the outcome interval in the branching process. This significantly reduces the number of nodes in the algorithm, so that the outcome-interval branch and bound algorithm we present performs better than usual branch and bound algorithms, which branch in the n-dimensional variable space.

#### Remark 3.2

Condition (1) can be relaxed to di(x) ≠ 0, ∀x ∈ X0, i = 1, 2, ⋯, p, when the functions ni(x) and di(x) are all linear; that is, the problem considered in this paper generalizes the minimax linear ratios problem that appeared in [13-15]. In fact, when ni(x) and di(x) are all linear, suppose without loss of generality that
$$d_i(x)=\sum_{j=1}^n \alpha_{ij}x_j+\gamma_i=\sum_{j\in T^+}\alpha_{ij}x_j+\sum_{j\in T^-}\alpha_{ij}x_j+\gamma_i>0,\quad i=1,2,\cdots,p,\tag{4}$$

where T+ = {j | αij > 0}, T− = {j | αij < 0}. If this does not hold, we can replace the ratio $\frac{n_i(x)}{d_i(x)}$ with $\frac{-n_i(x)}{-d_i(x)}$, which turns it into the case identified by (4). Subsequently, the underestimating function $\tilde{h}_i$ can be constructed as
$$\tilde{h}_i(x,x_{n+1})=n_i(x)-M_{i1}\sum_{j\in T^+}\alpha_{ij}x_j-M_{i2}\sum_{j\in T^-}\alpha_{ij}x_j-\gamma_i x_{n+1},\tag{5}$$

where $M_{i1}=\min\{\overline{v},u_{n+1}\}$, $M_{i2}=\max\{\underline{v},l_{n+1}\}$. It is much simpler and more economical to use this relaxation technique in a branch and bound algorithm than the one in [14, 15], where the linear relaxation programming is constructed by utilizing convex and/or concave envelopes of bilinear functions; the numerical tests will illustrate this.
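For the affine case, the underestimator (5) amounts to rescaling the denominator's coefficients. A minimal sketch, assuming a single affine ratio with hypothetical data (the helper name and sample numbers are illustrative, not from the paper):

```python
# Build the coefficients of the linear underestimator (5) for an affine
# denominator d(x) = sum_j a_j x_j + gamma: coefficients with a_j > 0 are
# scaled by M1 (M_{i1} in the text), those with a_j < 0 by M2 (M_{i2}).

def linear_underestimator(n_coeffs, n_const, d_coeffs, d_const, M1, M2):
    """Return (c, c0, ct) so that h_tilde(x, t) = c . x + c0 + ct * t, i.e.
    n(x) - M1*sum_{a_j>0} a_j x_j - M2*sum_{a_j<0} a_j x_j - gamma*t."""
    c = [nj - (M1 if aj > 0 else M2) * aj
         for nj, aj in zip(n_coeffs, d_coeffs)]
    return c, n_const, -d_const

# Hypothetical data: n(x) = 3*x1 + x2 + 0.8, d(x) = 2*x1 - x2 + 0.5
c, c0, ct = linear_underestimator([3.0, 1.0], 0.8, [2.0, -1.0], 0.5,
                                  M1=1.2, M2=0.4)
print(c, c0, ct)   # x-coefficients, constant term, coefficient of t
```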

#### Theorem 3.3

For any x ∈ X0, xn+1 ∈ D, let Δ = |un+1 − ln+1|. Then the following holds:
$$\tilde{h}_i(x,x_{n+1})\le h_i(x,x_{n+1}),\qquad \lim_{\Delta\to 0}\left|\tilde{h}_i(x,x_{n+1})-h_i(x,x_{n+1})\right|=0.$$

#### Proof

The proof of this theorem can be easily verified by the expression of ${\stackrel{~}{h}}_{i}\left(x,{x}_{n+1}\right)$ in (3), therefore, it is omitted here.□

Theorem 2.1 and Theorem 3.3 ensure that ${\stackrel{~}{h}}_{i}\left(x,{x}_{n+1}\right)$ are good underestimations for hi(x, xn+1) and will provide valid lower bounds for the optimal value of the initial problem (MFP).

## 4 Algorithm and its convergence

In this section, we will first present some key operations of the branch and bound algorithm, then the global optimization algorithm based on the former convex relaxation programming will be proposed, and finally, we will prove the global convergence of the present algorithm in theory.

## 4.1 Key operations

There are several key operations in the reduced outcome-interval branch and bound algorithm. The focus of these operations is to find better upper and lower bounds of the objective value at a faster pace. In our algorithm, there are three basic operations: branching, reduction, and bounding.

Choosing a suitable partition rule is a critical element in guaranteeing convergence to a global optimal solution; here we choose a simple adapted bisection rule that suffices to ensure convergence. For any subproblem identified by the hyper-rectangle
$$X^0\times D=\{(x,x_{n+1}) \mid x\in X^0,\ l_{n+1}\le x_{n+1}\le u_{n+1}\}\subseteq X^0\times D^0,$$

let $q=(l_{n+1}+\overline{v})/2$; then the partition set $X^0\times D$ will be subdivided into two regions $X^0\times \overline{D}$ and $X^0\times \overline{\overline{D}}$, where $\overline{D}=[l_{n+1},q]$ and $\overline{\overline{D}}=[q,u_{n+1}]$. Note that we only partition the unidimensional outcome interval D; the n-dimensional variable space is never partitioned in the algorithm, so the number of partition sets is much smaller than in branch and bound algorithms that use a pure variable-space partition rule. What is more, the adapted partition technique, which corresponds to the current objective value, also improves the performance of the algorithm.

Moreover, the partition sets generated by the branching operation can be further reduced by exploiting the special structure of the (EMFP). Indeed, based on Theorem 2.1, the EMFP problem has the same optimal value as the MFP problem, so when $x_{n+1}<\underline{v}$ or $x_{n+1}>\overline{v}$ the optimal value cannot be attained; hence the partition sets $\overline{D}=[l_{n+1},q]$ and $\overline{\overline{D}}=[q,u_{n+1}]$ can be further reduced to
$$\overline{D}=[\max\{\underline{v},l_{n+1}\},\,q]\quad\text{and}\quad \overline{\overline{D}}=[q,\,\min\{\overline{v},u_{n+1}\}],\tag{6}$$

respectively. Adding this technique to the branch and bound scheme appreciably improves the performance of the algorithm.
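The reduction (6) is just an interval clip against the current bound window. A minimal sketch (function name and sample numbers are illustrative, not from the paper):

```python
# Sketch of reduction (6): clip an outcome subinterval [lo, hi] to the
# current bound window [v_lo, v_hi] (the text's underline-v, overline-v);
# a node whose interval empties can be discarded outright.

def reduce_interval(lo, hi, v_lo, v_hi):
    """Clip [lo, hi] to [v_lo, v_hi]; return None if the node becomes empty."""
    lo, hi = max(lo, v_lo), min(hi, v_hi)
    return (lo, hi) if lo <= hi else None

print(reduce_interval(0.0, 10.0, 1.5, 4.0))   # clipped to (1.5, 4.0)
print(reduce_interval(5.0, 10.0, 1.5, 4.0))   # None: the node can be deleted
```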

The bounding operation aims at obtaining upper and lower bounds of the objective value for the initial problem (MFP). Let LB(Dk,μ) be the optimal value of the (RMFP) on the μth sub-region X0 × Dk,μ, and let xk,μ = x(X0 × Dk,μ) be the corresponding optimal solution in the kth iteration of the algorithm. Then LBk = min{LB(Dk,μ) | μ = 1, 2, ⋯, sk} is the new lower bound of the objective value for the (MFP), and each optimal solution xk,μ of the (RMFP) is a feasible solution of the (MFP). We choose $UB_k=\min\left\{\min_{\mu}\max_i\left\{\frac{n_i(x^{k,\mu})}{d_i(x^{k,\mu})}\right\},\ \overline{v}\right\}$ as the new best upper bound.

## 4.2 Algorithm statement and convergence property

Based upon the results and key operations given in the above sections, the new branch and bound algorithm for solving the EMFP problem, and hence the MFP problem, can be described as follows.

Step 0. (Initialization) Choose a convergence tolerance ϵ > 0, set the iteration counter k := 0 and the initial active node set Ω0 = X0 × D0.

Solve the initial convex relaxation problem (RMFP) over the region X0 × D0; if the (RMFP) is infeasible, then there is no feasible solution for the (MFP). Otherwise, denote the optimal value and solution by f0 and $x_{opt}^0$, respectively; then we obtain the initial upper and lower bounds of the optimal value for the MFP problem, that is, $\overline{v}:=f(x_{opt}^0)$ and $\underline{v}:=f_0$, where $f(x)=\max\left\{\frac{n_i(x)}{d_i(x)}\ \middle|\ i=1,2,\cdots,p\right\}$. If $\overline{v}-\underline{v}<\epsilon$, the algorithm stops and $x_{opt}^0$ is the optimal solution of the (MFP); otherwise proceed to Step 1.

Step 1. (Reduction) Substitute the interval end points ${l}_{n+1}^{k}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{u}_{n+1}^{k}$ by utilizing the reduction technique (6).

Step 2. (Branching) Partition Dk into two new sub-intervals according to the adapted rule described in Section 4.1. Add the new nodes to the active node set and denote the set of newly partitioned intervals by ${\stackrel{~}{D}}^{k}$.

Step 3. (Bounding) For each sub-region still of interest, X0 × Dk,μ ⊆ X0 × D, μ = 1, 2, …, sk, obtain the optimal solution and value of the RMFP problem corresponding to the outcome interval Dk,μ by solving a convex program. If LB(Dk,μ) > $\overline{v}$ or f(xk,μ) < $\underline{v}$, delete Dk,μ from $\tilde{D}^k$. Otherwise, let LBk = min{LB(Dk,μ) | μ = 1, 2, ⋯, sk} and UBk = min{f(xk,μ), $\overline{v}$}; then update the lower and upper bounds as
$$\underline{v}:=\max\{\underline{v},LB_k\},\qquad \overline{v}:=\min\{\overline{v},UB_k\},$$

and let $\Omega_k=(\Omega_k\setminus X)\cup \tilde{D}^k.$

Step 4. (Termination) Let $$\Omega_{k+1}=\Omega_k\setminus\{X \mid f(X)-LB(X)\le\epsilon,\ X\in\Omega_k\}.$$

If Ωk+1 = ∅, the algorithm stops and $\overline{v}$ is the global optimal value for the (MFP). Otherwise, set k := k + 1, select Xk from Ωk with ${X}^{k}=\underset{X\in {\mathrm{\Omega }}_{k}}{\text{argmin}}\,LB\left(X\right),$ and return to Step 3.
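The outcome-space idea behind Steps 0-4 can be caricatured in a few lines: search the one-dimensional outcome interval for the smallest t such that some x in the box satisfies $n_i(x)\le t\,d_i(x)$ for every ratio. The sketch below is a schematic simplification, not the paper's full algorithm: the toy instance is hypothetical, the subproblem feasibility check is a grid scan rather than a convex solver, and the adapted partition and node list are collapsed into plain bisection.

```python
# Schematic outcome-interval search on a toy 1-D minimax ratios instance:
# minimize max{(x+1)/(x+2), (3-x)/(x+1)} over x in [0, 3].

nums = [lambda x: x + 1.0, lambda x: 3.0 - x]
dens = [lambda x: x + 2.0, lambda x: x + 1.0]
grid = [3.0 * k / 2000 for k in range(2001)]     # grid over the box X = [0, 3]

def feasible(t):
    """Does some grid point satisfy n_i(x) <= t * d_i(x) for all i?"""
    return any(all(n(x) <= t * d(x) for n, d in zip(nums, dens)) for x in grid)

lo = 0.0                                          # trivial lower bound
hi = max(n(0.0) / d(0.0) for n, d in zip(nums, dens))   # f(0): an upper bound
for _ in range(60):                               # bisect the outcome interval
    mid = 0.5 * (lo + hi)
    if feasible(mid):
        hi = mid                                  # optimal value is <= mid
    else:
        lo = mid                                  # optimal value is > mid
print(round(hi, 4))                               # about 0.7016 on this instance
```

Here the two ratios cross near x ≈ 1.351, where the minimax value is attained; the bisection recovers it to grid accuracy without ever partitioning the variable space.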

#### Theorem 4.1

The proposed algorithm either terminates within finitely many iterations, with an optimal solution of the (MFP) found, or generates an infinite sequence of iterations such that, along any infinite branch of the branch-and-bound tree, any accumulation point of the sequence {xk} is a global optimal solution of the (MFP).

#### Proof

(1) If the proposed algorithm is finite, it terminates at some iteration k, k ≥ 0, and the termination criteria give $$\overline{v}-\underline{v}\le\epsilon.$$

From Step 0 and Step 3, this implies $$f(x^k)-LB_k\le\epsilon.$$

Let vopt be the optimal value of the MFP problem. Then, by Section 3 and Section 4.1, we know that $$f(x^k)\ge v_{opt}\ge LB_k.$$

Taking these together, we obtain $$v_{opt}+\epsilon\ge LB_k+\epsilon\ge f(x^k)\ge v_{opt}.$$

And thus the proof of part (1) is completed.

(2) Suppose the algorithm is infinite and generates an infinite feasible solution sequence $\{(x^k,x_{n+1}^k)\}$ by solving the (RMFP). Let $x_{n+1}^k=f(x^k)$; then $\{(x^k,x_{n+1}^k)\}$ is a feasible solution sequence for the (EMFP). Since the sequence {xk} is bounded, it has accumulation points; without loss of generality, assume $\lim_{k\to\infty}x^k=x^*$. On the other hand, by the continuity of ni(x) and di(x), we get $$\lim_{k\to\infty}x_{n+1}^k=\lim_{k\to\infty}f(x^k)=f(x^*).\tag{7}$$

Also, according to the branching regulation described before, we can see that $limk→∞ln+1k=limk→∞un+1k=xn+1∗.$(8)

What is more, note that $l_{n+1}^k\le f(x^k)\le u_{n+1}^k$; taking (7) and (8) together, we conclude that $$x_{n+1}^*=\lim_{k\to\infty}f(x^k)=f(x^*)=\lim_{k\to\infty}x_{n+1}^k.$$

Therefore $(x^*,x_{n+1}^*)$ is also a feasible solution for the (EMFP). Furthermore, since the lower bound sequence LBk of the optimal value is increasing and bounded above by the optimal value vopt in the algorithm, combining this with continuity we have $$\lim_{k\to\infty}LB_k=x_{n+1}^*\le v_{opt}\le \lim_{k\to\infty}x_{n+1}^k=x_{n+1}^*.$$

That is, $(x^*,x_{n+1}^*)$ is an optimal solution for the (EMFP), and hence x* is an optimal solution for the (MFP) by the equivalence of the problems (MFP) and (EMFP). The proof is completed.□

## 5 Numerical experiments and results

The purpose of this section is to demonstrate the computational performance of our algorithm on the MFP problem. To this end, a number of numerical experiments from recent works have been carried out on a personal computer with an Intel Core i5 processor at 2.40 GHz and 4 GB of RAM. The code is written in MATLAB 2014a and calls LINPROG for the linear relaxation problems. The comparison results are listed in Table 1. In addition, to assess the feasibility and stability of the proposed algorithm, a randomly generated test problem is given at the end of this section, and the randomized experiment results are shown in Table 2, Table 3, Fig. 2, and Fig. 3. Throughout this section, all problems are solved with a relative/absolute optimality tolerance of 1 × 10−6, and we use Iter, Nodes, and Time to denote the number of iterations, the maximum number of nodes stored in memory, and the CPU time required in the solution process, respectively.

Figure 2

Results of random Experiment 8 compared with Ref. [13].

Figure 3

Results of random Experiment 8 compared with Ref. [15].

Table 1

Results of the comparative numerical Experiments 1-7.

Table 2

Random numerical experiment 8 comparison with Ref. [13].

Table 3

Random numerical experiment 8 comparison with Ref. [15].

#### Example 1

([14]).
$$\begin{aligned}\min\max\ &\left\{\frac{3x_1+x_2-2x_3+0.8}{2x_1-x_2+x_3},\ \frac{4x_1-2x_2+x_3}{7x_1+3x_2-x_3},\ \frac{3x_1+2x_2-x_3+1.9}{x_1-x_2+x_3},\ \frac{4x_1-x_2+x_3}{8x_1+4x_2-x_3}\right\}\\ \text{s.t.}\ & x_1+x_2-x_3\le 1,\quad -x_1+x_2-x_3\le -1,\\ & 12x_1+5x_2+12x_3\le 34.8,\quad 12x_1+12x_2+7x_3\le 29.1,\quad -6x_1+x_2+x_3\le -4.1,\\ & 1.0\le x_1\le 1.2,\quad 0.55\le x_2\le 0.65,\quad 1.35\le x_3\le 1.45.\end{aligned}$$

#### Example 2

([15]).
$$\begin{aligned}\min\max\ &\left\{\frac{2.1x_1+2.2x_2-x_3+0.8}{1.1x_1-x_2+1.2x_3},\ \frac{3.1x_1-x_2+1.3x_3}{8.2x_1+4.1x_2-x_3}\right\}\\ \text{s.t.}\ & x_1+x_2-x_3\le 1,\quad -x_1+x_2-x_3\le -1,\\ & 12x_1+5x_2+12x_3\le 40,\quad 12x_1+12x_2+7x_3\le 50,\quad -6x_1+x_2+x_3\le -2,\\ & 1.0\le x_1\le 1.2,\quad 0.55\le x_2\le 0.65,\quad 1.35\le x_3\le 1.45.\end{aligned}$$

#### Example 3

([13, 15]).
$$\begin{aligned}\min\max\ &\left\{\frac{3x_1+4x_2-x_3+0.5}{2x_1-x_2+x_3+0.5},\ \frac{3x_1-x_2+3x_3+0.5}{9x_1+5x_2-x_3+0.5},\ \frac{4x_1-x_2+5x_3+0.5}{11x_1+6x_2-x_3},\ \frac{5x_1-x_2+6x_3+0.5}{12x_1+7x_2-x_3+0.9}\right\}\\ \text{s.t.}\ & x_1+x_2-x_3\le 1,\quad -x_1+x_2-x_3\le -1,\\ & 12x_1+5x_2+12x_3\le 42,\quad 12x_1+12x_2+7x_3\le 55,\quad -6x_1+x_2+x_3\le -3,\\ & 1.0\le x_1\le 2,\quad 0.50\le x_2\le 2,\quad 0.5\le x_3\le 2.\end{aligned}$$

#### Example 4

([13, 15]).
$$\begin{aligned}\min\max\ &\left\{\frac{3x_1+4x_2-x_3+0.9}{2x_1-x_2+x_3+0.5},\ \frac{3x_1-x_2+3x_3+0.5}{9x_1+5x_2-x_3+0.5},\ \frac{4x_1-x_2+5x_3+0.5}{11x_1+6x_2-x_3+0.9},\right.\\ &\left.\ \frac{5x_1-x_2+6x_3+0.5}{12x_1+7x_2-x_3+0.9},\ \frac{6x_1-x_2+7x_3+0.6}{11x_1+6x_2-x_3+0.9}\right\}\\ \text{s.t.}\ & 2x_1+x_2-x_3\le 2,\quad -2x_1+x_2-2x_3\le -1,\\ & 11x_1+6x_2+12x_3\le 45,\quad 11x_1+13x_2+6x_3\le 52,\quad -7x_1+x_2+x_3\le -2,\\ & 1.0\le x_1\le 2,\quad 0.35\le x_2\le 0.9,\quad 1.0\le x_3\le 1.55.\end{aligned}$$

#### Example 5

([13, 15]).
$$\begin{aligned}\min\max\ &\left\{\frac{5x_1+4x_2-x_3+0.9}{3x_1-x_2+2x_3+0.5},\ \frac{3x_1-x_2+4x_3+0.5}{9x_1+3x_2-x_3+0.5},\ \frac{4x_1-x_2+6x_3+0.5}{12x_1+7x_2-x_3+0.9},\right.\\ &\left.\ \frac{7x_1-x_2+7x_3+0.5}{11x_1+9x_2-x_3+0.9},\ \frac{7x_1-x_2+7x_3+0.7}{11x_1+7x_2-x_3+0.9}\right\}\\ \text{s.t.}\ & 2x_1+2x_2-x_3\le 3,\quad -2x_1+x_2-3x_3\le -1,\\ & 11x_1+7x_2+12x_3\le 47,\quad 13x_1+13x_2+6x_3\le 56,\quad -6x_1+2x_2+3x_3\le -1,\\ & 1.0\le x_1\le 2,\quad 0.35\le x_2\le 0.9,\quad 1.0\le x_3\le 1.55.\end{aligned}$$

#### Example 6

([13, 15, 18]).
$$\min\max\left\{\frac{37x_1+73x_2+13}{13x_1+13x_2+13},\ \frac{63x_1-18x_2+39}{13x_1+26x_2+13}\right\}\quad \text{s.t.}\ 5x_1-3x_2=3,\quad 1.5\le x_1\le 3.$$

#### Example 7

$$\min\max\left\{\frac{2x_1^2+x_2^2+2x_1x_2}{-3x_1^2-2x_2^2+4x_1x_2+14},\ \frac{x_1^2+x_1x_2+x_2^2+1}{-x_1^2+5}\right\}\quad \text{s.t.}\ x_1^2+x_2^2\le 4,\quad x_1+x_2\ge 1.$$

#### Example 8

$$\min\max\left\{\frac{2x_1^2+3x_2^2}{6-x_2^2},\ \frac{3x_1^2+x_2^2+4x_1x_2+0.5}{-9x_1^2+43}\right\}\quad \text{s.t.}\ 2x_1+3x_2\le 5,\quad -2x_1+x_2\le -1,\quad 1\le x_1\le 2,\quad 0.35\le x_2\le 0.9.$$

#### Example 9

([13]).
$$\min\max\left\{\frac{\sum_{i=1}^N n_{1i}x_i+\overline{n}_1}{\sum_{i=1}^N d_{1i}x_i+\overline{d}_1},\ \frac{\sum_{i=1}^N n_{2i}x_i+\overline{n}_2}{\sum_{i=1}^N d_{2i}x_i+\overline{d}_2},\ \cdots,\ \frac{\sum_{i=1}^N n_{pi}x_i+\overline{n}_p}{\sum_{i=1}^N d_{pi}x_i+\overline{d}_p}\right\}\quad \text{s.t.}\ Ax\le b,\quad 0\le x_i\le 3,\ i=1,2,\cdots,N.$$

In Table 1, we provide comparative results of our algorithm and the other methods identified in the second column of the table. It can be seen that our algorithm performs much better than the other algorithms in terms of efficiency.

In Table 2, the results of the randomized test compared with reference [13] are reported. We also display the comparative results in a double-y-axis broken-line graph in Figure 2. From Table 2 and Figure 2, it is easy to see that our method is much more stable and effective than Jiao's method.

In Table 3, the results of the randomized test compared with reference [15] are reported. We also display the comparative results in a double-y-axis broken-line graph in Figure 3. From Table 3 and Figure 3, it is easy to see that our method performs significantly better than Feng's method in [15].

## 6 Concluding remarks

In this paper, we described an efficient implementation of unidimensional outcome branch and bound algorithm for global optimization of the MFP problem. Instead of using linearization or concave envelope technique, we constructed the concise convex relaxation programming by simple arithmetic operations. This relaxation method is more practical and available for convex-concave ratios problem which generalized the linear case appeared in various works. Moreover, unidimensional adapted interval partition and reduction techniques are incorporated into the algorithm scheme, which can significantly improve the performance of the algorithm. Global convergence is proved and the feasibility and robustness of the algorithm is illustrated by numerical experiments.

## Acknowledgement

The authors would like to express their sincere thanks to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of our paper.

## References

• [1]

Ahmad I., Husain Z., Duality in nondifferentiable minimax fractional programming with generalized convexity, Appl. Math. Comput., 2006, 176, 545-551

• [2]

Damaneh M., On fractional programming problems with absolute-value functions, Int. J. Comput. Math., 2011, 88(4), 661-664

• [3]

Zhu J., Liu H., Hao B., A new semismooth Newton method for NCPs based on the penalized KK function, Int. J. Comput. Math., 2012, 89(4), 543-560

• [4]

Jiao H., Liu S., A practicable branch and bound algorithm for sum of linear ratios problem, Eur. J. Oper. Res., 2015, 243, 723-730

• [5]

Barrodale I., Best rational approximation and strict quasiconvexity, SIAM J. Numer. Anal.,1973, 10, 8-12

• [6]

Leber M., Kaderali L., Schönhuth A., Schrader R., A fractional programming approach to efficient DNA melting temperature calculation, Bioinformatics, 2005, 21(10), 2375-2382

• [7]

Fasakhodi A., Nouri S., Amini M., Water resources sustainability and optimal cropping pattern in farming systems:a multi-objective fractional goal programming approach, Water Res. Manag.,2010, 24,4639-4657

• [8]

Balasubramaniam P., Lakshmanan S., Delay-interval-dependent robust-stability criteria for neutral stochastic neural networks with polytopic and linear fractional uncertainties, Int. J. Comput. Math.,2011, 88(10), 2001-2015

• [9]

Goedhart M., Spronk J., Financial planning with fractional goals, Eur. J. Oper. Res., 1995, 82(1), 111-124

• [10]

Ding F., Coupled-least-squares identification for multivariable systems, IET Control Theory Appl.,2013, 7(1),68-79

• [11]

Ding F., Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Appl. Math. Model., 2013, 37, 1694-1704

• [12]

Liu Y., Ding R., Consistency of the extended gradient identification algorithm for multi-input multi-output systems with moving average noises, Int. J. Comput. Math., 2013,90(9),1840-1852

• [13]

Jiao H., Liu S., A new linearization technique for minimax linear fractional programming, Int. J. Comput. Math.,2014, 91(8), 1730- 1743

• [14]

Feng Q., Jiao H., Mao H., Chen Y., A Deterministic Algorithm for Min-max and Max-min Linear Fractional Programming Problems, Int. J. Comput. Intell. Syst.,2011,4,134-141

• [15]

Feng Q., Mao H., Jiao H., A feasible method for a class of mathematical problems in manufacturing system, Key Eng.Mater.,2011,460-461,806-809

• [16]

Crouzeix J., Ferland J., Schaible S., An algorithm for generalized fractional programs, J. Optim. Theory Appl., 1985, 47, 135-149

• [17]

Freund R., Jarre F., An interior-point method for fractional programs with convex constraints, Math. Program.,1994, 67,407-440

• [18]

Phuong N., Tuy H., A unified monotonic approach to generalized linear fractional programming, J. Global Optim., 2003, 26, 229-259

• [19]

Borde J., Crouzeix J., Convergence of a Dinkelbach-type algorithm in generalized fractional programming, Z. Oper. Res., 1987, 31(1), A31-A54

• [20]

Lin J., Sheu R., Modified Dinkelbach-type algorithm for generalized fractional programs with infinitely many ratios, J. Optim. Theory Appl., 2005, 126(2), 323-343

• [21]

Jeyakumar V., Li G., Srisatkunarajah S., Strong duality for robust minimax fractional programming problems, Eur. J. Oper. Res.,2013, 228, 331-336

• [22]

Lai H., Huang T., Optimality conditions for nondifferentiable minimax fractional programming with complex variables, J. Math. Anal. Appl.,2009,359, 229-239

Accepted: 2017-04-03

Published Online: 2017-06-22

Citation Information: Open Mathematics, Volume 15, Issue 1, Pages 840–851, ISSN (Online) 2391-5455,
