
# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo

IMPACT FACTOR 2017: 0.831
5-year IMPACT FACTOR: 0.836

CiteScore 2018: 0.90

SCImago Journal Rank (SJR) 2018: 0.323
Source Normalized Impact per Paper (SNIP) 2018: 0.821

Mathematical Citation Quotient (MCQ) 2017: 0.32

ICV 2017: 161.82

Open Access | Online ISSN: 2391-5455
Volume 16, Issue 1

# A reduced space branch and bound algorithm for a class of sum of ratios problems

Yingfeng Zhao (corresponding author) / Ting Zhao

School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China
Published Online: 2018-05-30 | DOI: https://doi.org/10.1515/math-2018-0049

## Abstract

The sum of ratios problem arises frequently in various areas of engineering practice and management science, but most solution methods for this kind of problem are designed to find local solutions only. In this paper, we develop a reduced space branch and bound algorithm for globally solving the sum of convex-concave ratios problem. By introducing auxiliary variables, the initial problem is converted into an equivalent problem with a linear objective function. The convex relaxation of the equivalent problem is then established by relaxing the auxiliary variables only in the outcome space. By integrating acceleration and reduction techniques into the branch and bound scheme, a global optimization algorithm for this class of problems is obtained. Convergence and optimality of the algorithm are established, and numerical examples taken from recent literature and MINLPLib validate its performance.

MSC 2010: 90C26; 90C30

## 1 Introduction

Fractional programming occurs frequently in a variety of economic, industrial and engineering problems [1]. It is one of the most topical and useful fields in nonconvex optimization, and intensive, systematic research has been devoted to fractional programming since the seminal works of Charnes and Cooper [2, 3]. The sum of ratios problem (SRP) is a special class of fractional programs [4, 5]. As a generalization of linear fractional programming, which optimizes a single linear ratio, the SRP has a broad range of applications, including clustering problems [6], transportation planning [7], multi-stage stochastic shipping [8], finance and investment [9], and layered manufacturing problems [10, 11], to name but a few. The reader is referred to the survey [12] and the bibliography [13] for many other applications. In this paper, we focus on the following sum of ratios problem:

$\mathrm{SRP}:\quad \min\ f(x)=\sum_{i=1}^{p}\delta_i\,\frac{\varphi_i(x)}{\psi_i(x)}\quad \mathrm{s.t.}\quad h_j(x)\le 0,\ j=1,2,\dots,m,\quad x\in X=[\underline{x},\overline{x}],$

where each coefficient δi is a real number, and the functions ϕi(x), −ψi(x), i = 1, 2, …, p, and hj(x), j = 1, 2, …, m, are all convex (hence continuous). Furthermore, we assume that ψi(x) ≠ 0 for all x ∈ X. By the continuity of the denominators, each ψi must satisfy either ψi(x) > 0 or ψi(x) < 0 throughout X. Based on the discussion in [14], and for the sake of simplicity, we only need to consider a special case of the SRP in which the numerator and denominator of each ratio in the objective function satisfy the following condition:

$\varphi_i(x)\ge 0,\quad \psi_i(x)>0,\quad \forall\, x\in X,\ i=1,2,\cdots,p.$(1)

The SRP has attracted the interest of many researchers and practitioners over the years, at least in part because of the existence of multiple local solutions that are not globally optimal. Charnes and Cooper proved that the optimization of a single linear ratio is equivalent to a linear program and hence can be solved in polynomial time [2, 15]. This is not true for the SRP, whose objective function is a sum of p (p ≥ 2) nonlinear (or even linear) ratios; owing to some inherent difficulties, finding the global optimizer of the SRP remains both theoretically and computationally challenging. Over the past several years, a number of algorithms have been proposed for the SRP and its special forms. Konno et al. presented a parametric simplex method and an efficient heuristic algorithm for globally solving sums of linear fractional functions and a special case [17, 18], but their algorithms handle only sums of linear ratios, and the problem must have three ratios. Falk and Palocsay put forward an approach based on image space analysis for globally solving the sum of affine ratios problem [19]: they identify classes of nonconvex problems involving either sums or products of ratios of linear terms that can be treated by analysis in a transformed space, in which each original ratio is associated with a new variable; optimization in the image space is easy in certain directions, and the overall solution is realized by sequentially optimizing along these directions. This algorithm performs well, but the problems it treats may only have linear constraints. Pei and Zhu presented a branch and bound algorithm obtained by converting the problem into a D.C. program [20]; their algorithm performs well when the number of variables is not too large.
In addition, Shen and Wang developed two branch-reduction-bound algorithms for the sum of linear ratios problem [21, 22]; both branch in the variable space, so their performance declines sharply as the number of variables grows. Jiao and Liu presented a practical outcome space branch and bound algorithm for globally maximizing a sum of linear ratios [14]; it can effectively solve problems with quite a large number of variables, but the branching operation takes place in the outcome space of the reciprocals of the denominators. Despite these various contributions, there is still no decisive method for globally solving the general sum of ratios problem, and an efficient solution method for the SRP remains an open issue.

In this study, we present a reduced space branch and bound algorithm with practical accelerating techniques that exploits properties (concavity, convexity and continuity) of the objective and constraint functions of the SRP. The attractive properties of this algorithm are mainly embodied in three aspects. First, the problem we consider is more general than those in most of the literature cited above. Second, the relaxation we use is concise and practical, and the adapted subdivision and range reduction techniques, carried out in the outcome space, sharply reduce the number of nodes in the branching tree, so the execution efficiency of the algorithm is significantly improved. Finally, the global convergence property is proved, and numerical experiments together with random tests illustrate the feasibility and robustness of the algorithm.

The remainder of this paper is organized as follows. The next section shows how to construct the equivalent problem EP and the convex relaxation programming of the EP according to the concavity and convexity of the objective and constraint functions of the SRP. The condensing, branching and bounding operations of the new algorithm are established in Section 3. The detailed statement and the global convergence property of the presented algorithm are put forward in Section 4. Section 5 reports a computational comparison between our algorithm and several algorithms from the literature. Concluding remarks are given in the last section.

## 2 Equivalent problem and relaxation programming

In this section, we first transform the problem SRP into an equivalent problem EP by associating each ratio in the objective function of the SRP with an additional variable, which we call an outcome variable; our focus then shifts to finding the global optimal solution of the EP. By utilizing the special structure of the EP, a concise convex relaxation programming for the EP is introduced, with which we only need to branch in a reduced outcome space; at each iteration a new upper bound and a new lower bound of the optimal value are obtained simultaneously, which greatly reduces the computational workload.

## 2.1 Equivalent problem

To solve the problem, we first transform the SRP into an equivalent problem EP in which the objective function is linear and the constraint functions possess a special structure that is beneficial for constructing convex relaxation problems. To explain how such a reformulation is possible, we introduce p auxiliary variables ti, i = 1, 2, ⋯, p, and, for definiteness and without loss of generality, assume that

1. δi > 0, i = 1, 2, ⋯, p, when ϕi(x) is convex and ψi(x) is concave, i = 1, 2, ⋯, p.

2. δi > 0, i = 1, 2, ⋯, T; δi < 0, i = T + 1, T + 2, ⋯, p, when ϕi(x) and ψi(x) are linear, i = 1, 2, ⋯, p.

Then denote

$d_i=\min_{x\in X}\psi_i(x);\qquad l_i^{0}=\frac{\min_{x\in X}\varphi_i(x)}{\max_{x\in X}\psi_i(x)},\qquad u_i^{0}=\frac{\max_{x\in X}\varphi_i(x)}{\min_{x\in X}\psi_i(x)},\qquad i=1,2,\cdots,p,$

note that the values of $d_i$, $l_i^{0}$ and $u_i^{0}$ can be obtained easily by utilizing the convexity of the numerators and the concavity of the denominators, and clearly $0\le l_i^{0}\le u_i^{0}$.
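When each ϕi and ψi is affine and X is a box, the quantities $d_i$, $l_i^{0}$, $u_i^{0}$ reduce to closed-form interval computations, since the minimum (maximum) of an affine function over a box is attained coordinatewise. Below is a minimal sketch of this linear/box special case (the helper names are ours, not the paper's; for general convex ϕi and concave ψi one would solve the corresponding convex programs instead):

```python
# Bounds of an affine function c^T x + beta over the box lo <= x <= hi,
# computed coordinatewise (each term c_j * x_j is monotone in x_j).
def box_min(c, beta, lo, hi):
    return beta + sum(min(cj * l, cj * u) for cj, l, u in zip(c, lo, hi))

def box_max(c, beta, lo, hi):
    return beta + sum(max(cj * l, cj * u) for cj, l, u in zip(c, lo, hi))

# d_i, l_i^0, u_i^0 for the ratio (num.x + num0)/(den.x + den0) over the box.
def outcome_bounds(num, num0, den, den0, lo, hi):
    d = box_min(den, den0, lo, hi)           # d_i = min psi_i(x)
    assert d > 0, "condition (1) requires psi_i > 0 on X"
    l0 = box_min(num, num0, lo, hi) / box_max(den, den0, lo, hi)
    u0 = box_max(num, num0, lo, hi) / d
    return d, l0, u0

# First ratio of Example 5.1: (-x1 + 2x2 + 2)/(3x1 - 4x2 + 5) on [0, 1]^2.
d, l0, u0 = outcome_bounds([-1, 2], 2, [3, -4], 5, [0, 0], [1, 1])
print(d, l0, u0)   # d = 1, l0 = 0.125, u0 = 4.0
```

Each ratio contributes one such interval $[l_i^{0},u_i^{0}]$, and together they form the initial outcome box $D^0$.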

Next, we consider the following equivalent problem EP:

$\mathrm{EP}:\quad \min\ g(t)=\sum_{i=1}^{p}\delta_i t_i\quad \mathrm{s.t.}\quad c_i(t)=\varphi_i(x)-t_i(\psi_i(x)-d_i)-t_i d_i\le 0,\ i=1,2,\dots,T,\quad c_i(t)=\varphi_i(x)-t_i(\psi_i(x)-d_i)-t_i d_i\ge 0,\ i=T+1,T+2,\dots,p,\quad h_j(x)\le 0,\ j=1,2,\dots,m,\quad x\in X,\ t\in D^{0},$

where $D^{0}=\{t\in R^{p}\mid l_i^{0}\le t_i\le u_i^{0},\ i=1,2,\dots,p\}$ is called the outcome space corresponding to the feasible region of the SRP. We now show that the problems SRP and EP are equivalent in the sense of the following theorem.

#### Theorem 2.1

x* ∈ Rn is a global optimal solution of the SRP if and only if (x*, t*) ∈ Rn+p is a global optimal solution of the EP, where $t_i^{\ast}=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})},\ i=1,2,\dots,p$.

#### Proof

Assume x* ∈ Rn is a global optimal solution of the SRP and let $t_i^{\ast}=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})},\ i=1,2,\dots,p$; then (x*, t*) ∈ Rn+p is a feasible solution of the EP. According to the optimality of x* in the SRP, we know that, for each x ∈ F,

$f(x)=\sum_{i=1}^{p}\delta_i\frac{\varphi_i(x)}{\psi_i(x)}\ge f(x^{\ast})=\sum_{i=1}^{p}\delta_i\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})}=\sum_{i=1}^{p}\delta_i t_i^{\ast}=g(t^{\ast}).$(2)

In addition, if (x, t) is a feasible solution of the EP, then, since ti ≥ ϕi(x)/ψi(x) with δi > 0 for i ≤ T, and ti ≤ ϕi(x)/ψi(x) with δi < 0 for i > T, we obtain

$g(t)=\sum_{i=1}^{T}\delta_i t_i+\sum_{i=T+1}^{p}\delta_i t_i\ \ge\ \sum_{i=1}^{T}\delta_i\frac{\varphi_i(x)}{\psi_i(x)}+\sum_{i=T+1}^{p}\delta_i\frac{\varphi_i(x)}{\psi_i(x)}=f(x).$(3)

From (2) and (3), we conclude that (x*, t*) ∈ Rn+p is a global optimal solution of the EP. Conversely, if (x*, t*) ∈ Rn+p is a global optimal solution of the EP, we first prove that

$t_i^{\ast}=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})},\quad i=1,2,\dots,p.$

Otherwise, according to the feasibility of (x*, t*), one of the following two statements

$t_i^{\ast}>\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})}\ \text{for some}\ i\in\{1,2,\cdots,T\},\qquad t_j^{\ast}\ge\frac{\varphi_j(x^{\ast})}{\psi_j(x^{\ast})}\ \text{for all}\ j\ne i,$

or

$t_i^{\ast}<\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})}\ \text{for some}\ i\in\{T+1,T+2,\cdots,p\},\qquad t_j^{\ast}\le\frac{\varphi_j(x^{\ast})}{\psi_j(x^{\ast})}\ \text{for all}\ j\ne i$

must hold. Let $\overline{t}_i=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})},\ i=1,2,\dots,p$; then (x*, t̄) is a feasible solution and $g(\overline{t})=\sum_{i=1}^{T}\delta_i\overline{t}_i+\sum_{i=T+1}^{p}\delta_i\overline{t}_i<\sum_{i=1}^{T}\delta_i t_i^{\ast}+\sum_{i=T+1}^{p}\delta_i t_i^{\ast}=g(t^{\ast}),$ which contradicts the optimality of (x*, t*). Hence we have

$t_i^{\ast}=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})},\quad i=1,2,\dots,p.$

Furthermore, for all x ∈ F, let $t_i=\frac{\varphi_i(x)}{\psi_i(x)},\ i=1,2,\dots,p$; then (x, t) ∈ Rn+p is a feasible solution of the EP, and by the optimality of (x*, t*) we have

$f(x^{\ast})=\sum_{i=1}^{p}\delta_i\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})}=\sum_{i=1}^{p}\delta_i t_i^{\ast}\le \sum_{i=1}^{p}\delta_i t_i=\sum_{i=1}^{p}\delta_i\frac{\varphi_i(x)}{\psi_i(x)}=f(x),$

that is, x* is a global optimal solution of the SRP, and this completes the proof.□

By Theorem 2.1, to solve problem SRP we only need to consider how to solve problem EP. To this end, we make full use of the structure of the EP to establish its convex or linear relaxation problem for designing the presented algorithm. To keep things simple, we only consider the linear situation (case ii)); the construction extends easily to the nonlinear circumstances satisfying condition i).

## 2.2 Relaxation technique

In this part, we concentrate on how to construct the linear relaxation programming problem of the EP under the assumption that all functions appearing in the SRP are linear, i.e., ϕi(x) = (ni)Tx + βi and ψi(x) = (di)Tx − γi. Note that the objective function of problem EP is already linear, so we only need to consider the bilinear constraints. For simplicity, we denote by D the rectangle generated by the branching operation, where D = D1 × D2 × ⋯ × Dp with Di = {ti ∈ R | li ≤ ti ≤ ui}, and by F a subset of the feasible region appearing in the branch operation. We can then put forward an approach for generating a linear bounding function of the constraint functions of problem EP, given by the following Theorem 2.2.

#### Theorem 2.2

For any (x, t) ∈ F × D, denote

$\tilde{c}_i(t)=(n^{i})^{T}x-\sum_{j=1}^{n}\theta_{ij}d_j^{i}x_j+\beta_i+t_i\gamma_i,\quad i=1,2,\cdots,p,$(4)

where

$\theta_{ij}=\begin{cases}u_i, & d_j^{i}\ge 0\\ l_i, & d_j^{i}<0\end{cases},\ i=1,2,\cdots,T;\qquad \theta_{ij}=\begin{cases}l_i, & d_j^{i}\ge 0\\ u_i, & d_j^{i}<0\end{cases},\ i=T+1,T+2,\cdots,p,$

then we have

1. For x ≥ 0, the function c̃i(t) is a lower bounding function for ci(t) over the region F × D when i ∈ {1, 2, ⋯, T}, and an upper bounding function when i ∈ {T + 1, ⋯, p}.

2. The function c̃i(t) approximates ci(t), (i = 1, 2, ⋯, p), as |ui − li| → 0; that is, |c̃i(t) − ci(t)| → 0 as |ui − li| → 0.

#### Proof

1. For any (x, t) ∈ F × D and i ∈ {1, 2, ⋯, T}, by the definitions of ci(t) and c̃i(t) and the choice of θij, we have

$c_i(t)=\varphi_i(x)-t_i\psi_i(x)=(n^{i})^{T}x+\beta_i-t_i\big((d^{i})^{T}x-\gamma_i\big)=(n^{i})^{T}x-\sum_{j=1}^{n}t_i d_j^{i}x_j+t_i\gamma_i+\beta_i\ \ge\ (n^{i})^{T}x-\sum_{j=1}^{n}\theta_{ij}d_j^{i}x_j+\beta_i+t_i\gamma_i=\tilde{c}_i(t).$

So conclusion (i) holds; the case i ∈ {T + 1, ⋯, p} is analogous, with the inequality reversed.

2. For any (x, t) ∈ F × D, by the definition of ci(t) and i(t), we have

$|\tilde{c}_i(t)-c_i(t)|=\Big|\sum_{j=1}^{n}\theta_{ij}d_j^{i}x_j-\sum_{j=1}^{n}t_i d_j^{i}x_j\Big|=\Big|\sum_{j=1}^{n}(\theta_{ij}-t_i)d_j^{i}x_j\Big|\le \Big|\sum_{j=1}^{n}d_j^{i}x_j\Big|\,|u_i-l_i|\le M_i|u_i-l_i|.$(5)

where Mi is an upper bound of $\big|\sum_{j=1}^{n}d_j^{i}x_j\big|$; its existence follows from the continuity of the affine function $\sum_{j=1}^{n}d_j^{i}x_j$ over a compact region. From (5), it is easy to see that

$|\tilde{c}_i(t)-c_i(t)|\to 0,\quad \text{as}\ |u_i-l_i|\to 0.$

Thus the proof is complete.□

Therefore, according to the above discussion, we can obtain the linear relaxation programming problem REPD corresponding to the outcome space D of EPD as follows:

$\mathrm{REP}_{D}:\quad \min\ g(t)=\sum_{i=1}^{p}\delta_i t_i\quad \mathrm{s.t.}\quad \tilde{c}_i(t)\le 0,\ i=1,2,\dots,T,\quad \tilde{c}_i(t)\ge 0,\ i=T+1,T+2,\cdots,p,\quad x\in F,\ t\in D,$

where c̃i(t) is defined by (4). From now on, we use the symbol EPD to denote the problem EP restricted to the outcome space D, and any similar symbol in the rest of this paper should be understood in the same way.

From the construction of the REPD, it is not hard to see that every feasible solution of EPD is also feasible for the REPD, and its objective value is not less than the optimal value of the REPD; thus the REPD provides a valid lower bound for the optimal value of problem EPD. Moreover, problem REPD approximates EPD as $\max_{1\le i\le p}|u_i-l_i|\to 0$, as indicated by Theorem 2.2.
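For the linear case, the choice of θij in (4) can be checked numerically: with x ≥ 0 and t ∈ [l, u], the underestimation c̃i(t) ≤ ci(t) for i ≤ T holds term by term. Below is a small sketch that evaluates ci and c̃i for one ratio (function names and the sample data are ours, chosen only for illustration):

```python
# theta_ij rule from Theorem 2.2 for an index i <= T (delta_i > 0).
def theta(d_j, l_i, u_i):
    return u_i if d_j >= 0 else l_i

# c_i(x, t) = n^T x + beta - t*(d^T x - gamma), i.e. phi_i(x) - t_i*psi_i(x).
def c_exact(n, beta, d, gamma, x, t):
    return sum(nj * xj for nj, xj in zip(n, x)) + beta \
        - t * (sum(dj * xj for dj, xj in zip(d, x)) - gamma)

# Linear underestimator (4): n^T x - sum_j theta_ij d_j x_j + beta + t*gamma.
def c_tilde(n, beta, d, gamma, x, t, l, u):
    lin = sum((nj - theta(dj, l, u) * dj) * xj
              for nj, dj, xj in zip(n, d, x))
    return lin + beta + t * gamma

n, beta, d, gamma = [1, 2], 1.0, [3, -4], 5.0
x, (l, u) = [0.5, 1.0], (0.2, 0.9)
for t in (0.2, 0.6, 0.9):      # underestimation holds for every t in [l, u]
    assert c_tilde(n, beta, d, gamma, x, t, l, u) <= c_exact(n, beta, d, gamma, x, t) + 1e-12
print(c_exact(n, beta, d, gamma, x, 0.6), c_tilde(n, beta, d, gamma, x, 0.6, l, u))
```

The gap between the two values shrinks as u − l → 0, in line with part 2 of Theorem 2.2.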

## 3 Key operations for algorithm design

To present the reduced space branch and bound algorithm for solving the SRP, we describe in this section three fundamental operations: branching, condensing and bounding.

## 3.1 Branching operation

In this paper, we adopt a so-called adapted partition technique to subdivide the initial box D0 into sub-boxes. The adapted partition is performed in the reduced outcome space associated with problem EP rather than in the n-dimensional variable space; this is where it differs from general branch and bound algorithms performed in the variable space. For any subset D = {t ∈ Rp | li ≤ ti ≤ ui} ⊆ D0, the division procedure is as follows.

Partition regulation

1. Let $r\in \mathrm{argmax}\,\{u_i-l_i\mid i=1,2,\cdots,p\}$.

2. Let θr = (1 − α)lr + αur.

3. Subdivide Dr into two intervals $D_r^{1}=[l_r,\theta_r]$ and $D_r^{2}=[\theta_r,u_r]$, then let

$D'=D_1\times D_2\times \cdots \times D_{r-1}\times D_r^{1}\times D_{r+1}\times \cdots \times D_p,$

and

$D''=D_1\times D_2\times \cdots \times D_{r-1}\times D_r^{2}\times D_{r+1}\times \cdots \times D_p.$

Thus, region D is divided into two new hyper-rectangles D′ and D″.

It can be seen from the above partition rule that only the p-dimensional outcome space is partitioned; the n-dimensional variable space is never divided. This is precisely where our algorithm differs from the usual branch and bound algorithm, and, as we shall see, it makes the algorithm quite efficient for problems in which the number of ratios in the objective function is far smaller than the number of variables.
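The partition rule above can be sketched in a few lines; the outcome box is a list of intervals, and only its widest edge is split at the α-ratio point θr (function name is ours):

```python
# Adapted alpha-ratio bisection of an outcome-space box (Section 3.1).
# D is a list of (l_i, u_i) intervals; alpha in (0, 1) is the split ratio.
def partition(D, alpha=0.5):
    r = max(range(len(D)), key=lambda i: D[i][1] - D[i][0])  # widest edge
    lr, ur = D[r]
    th = (1 - alpha) * lr + alpha * ur                       # theta_r
    D1 = D[:r] + [(lr, th)] + D[r + 1:]
    D2 = D[:r] + [(th, ur)] + D[r + 1:]
    return D1, D2

D1, D2 = partition([(0.0, 1.0), (0.0, 4.0), (2.0, 3.0)], alpha=0.5)
print(D1, D2)   # splits the second interval (width 4) at theta = 2.0
```

Note that the dimension of the partitioned space is p, the number of ratios, regardless of how many variables x has.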

## 3.2 Condensing and bounding technique

For any rectangle Dk ⊆ D0 generated by the branching operation in the k-th iteration, the condensing operation reduces the current partition still of interest by cutting away the part that cannot contain a global optimal solution of problem EPD0. The bounding operation estimates an upper and (or) lower bound of the optimal objective value of the EP and removes subregions that merit no further investigation.

In the k-th iteration, we first solve the linear relaxation programming problem REPDk; assume its optimal solution is (x̄k, t̄k). Letting $\tilde{t}_i^{k}=\frac{\varphi_i(\overline{x}^{k})}{\psi_i(\overline{x}^{k})}$, we obtain a feasible solution (x̄k, t̃k) of problem EPD0, whose objective value is of course an upper bound for the optimal value of EPD0. Furthermore, the optimal value of the REPDk is a lower bound for the optimal value of EPDk, and the smallest optimal value over all subproblems in the k-th iteration is a lower bound for the optimal value of EPD0. Assume that $\overline{f}$ is the best upper bound of the optimum of EPD0 known so far; then the condensing technique can be described in the form of the following theorem.

#### Theorem 3.1

For any sub-region Dk ⊆ D0, let $f_{min}^{k}$ be the lower bound obtained for problem EPDk from the relaxation REPDk. Then the following two conclusions hold:

1. If $f_{min}^{k}>\overline{f}$, then Dk contains no global optimal solution of problem EPD0, so it can be removed.

2. If $f_{min}^{k}\le \overline{f}$, then for each j ∈ {1, 2, ⋯, T} the region $\overline{D}^{k}\subseteq D^{k}$ can be incised, and for each j ∈ {T + 1, T + 2, ⋯, p} the region $\overline{\overline{D}}^{k}\subseteq D^{k}$ can be incised, where

$\overline{D}^{k}=D_1^{k}\times \cdots \times D_{j-1}^{k}\times \overline{D}_j^{k}\times D_{j+1}^{k}\times \cdots \times D_p^{k},$

and

$\overline{\overline{D}}^{k}=D_1^{k}\times \cdots \times D_{j-1}^{k}\times \overline{\overline{D}}_j^{k}\times D_{j+1}^{k}\times \cdots \times D_p^{k},$

with

$r_j^{k}=\begin{cases}\dfrac{\overline{f}-f_{min}^{k}}{\delta_j}+l_j^{k}, & j=1,2,\cdots,T,\\[1mm]\dfrac{\overline{f}-f_{min}^{k}}{\delta_j}+u_j^{k}, & j=T+1,T+2,\cdots,p,\end{cases}$

and

$\overline{D}_j^{k}=(r_j^{k},u_j^{k}]\cap D_j^{k},\qquad \overline{\overline{D}}_j^{k}=[l_j^{k},r_j^{k})\cap D_j^{k}.$

#### Proof

1. Conclusion (i) is obvious, since $f_{min}^{k}$ underestimates the objective value of every feasible point over Dk; its proof is omitted.

2. Take any j ∈ {1, 2, ⋯, T} and any feasible (x, t) with $t\in \overline{D}^{k}$, so that $t_j>r_j^{k}$. Noting that $\sum_{i\ne j}\delta_i t_i\ge f_{min}^{k}-\delta_j l_j^{k}$ over Dk, and that δj > 0, we obtain

$\sum_{i=1}^{p}\delta_i t_i=\sum_{i\ne j}\delta_i t_i+\delta_j t_j\ \ge\ f_{min}^{k}-\delta_j l_j^{k}+\delta_j t_j\ >\ f_{min}^{k}+\delta_j(r_j^{k}-l_j^{k})=f_{min}^{k}+(\overline{f}-f_{min}^{k})=\overline{f};$

therefore $\overline{D}^{k}$ contains no global optimal solution of EPD0, and it can be incised.

In the same way, for j ∈ {T + 1, T + 2, ⋯, p} and $t\in \overline{\overline{D}}^{k}$, so that $t_j<r_j^{k}$, since δj < 0 we have

$\sum_{i=1}^{p}\delta_i t_i\ \ge\ f_{min}^{k}-\delta_j u_j^{k}+\delta_j t_j\ >\ f_{min}^{k}+\delta_j(r_j^{k}-u_j^{k})=\overline{f};$

similarly, $\overline{\overline{D}}^{k}$ contains no global optimal solution of EPD0, and it is incised in the algorithm. □

By Theorem 3.1, the condensing operation can cut away a large part of the current region in which no optimal solution exists, so the rapid growth of the number of branching nodes is suppressed from iteration to iteration. Additionally, unlike a standard branch and bound algorithm, the branching method used in this study can adjust the relative measure of the partitions by adopting different ratios α, which can further enhance the convergence speed of the algorithm.
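In code, the range-reduction cut of Theorem 3.1 is a per-coordinate interval shrink. The sketch below (names are ours) takes the incumbent upper bound and, for validity of the cut in the sense argued in the proof, the interval lower bound $f_{low}=\sum_{i\le T}\delta_i l_i+\sum_{i>T}\delta_i u_i$ of $g(t)=\sum_i\delta_i t_i$ over the box:

```python
# Range reduction (condensing) on an outcome box D = [(l_1,u_1),...,(l_p,u_p)],
# given the incumbent upper bound f_best and an interval lower bound f_low.
def condense(D, delta, T, f_best, f_low):
    if f_low > f_best:            # Theorem 3.1(i): whole box prunable
        return None
    new = []
    for j, (l, u) in enumerate(D):
        if j < T:                 # delta_j > 0: points with t_j > r_j cannot be optimal
            r = (f_best - f_low) / delta[j] + l
            new.append((l, min(u, r)))
        else:                     # delta_j < 0: points with t_j < r_j cannot be optimal
            r = (f_best - f_low) / delta[j] + u
            new.append((max(l, r), u))
    return new

# g(t) = t1 - 2*t2 on [0,10] x [0,10]: f_low = 1*0 + (-2)*10 = -20.
print(condense([(0.0, 10.0), (0.0, 10.0)], [1.0, -2.0], 1, -15.0, -20.0))
```

In this sample, any t with t1 > 5 has g(t) ≥ t1 − 20 > −15 = f_best, and any t with t2 < 7.5 has g(t) ≥ −2 t2 > −15, so the box shrinks to [0, 5] × [7.5, 10] without losing any candidate better than the incumbent.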

## 4 Algorithm statement and convergence analysis

Based upon the above results and techniques, the basic steps of the reduced space branch and bound algorithm with efficient accelerating techniques for globally solving the SRP are summarized in this section.

## 4.1 Algorithm statement

By integrating the condensing technique and partition skills into the reduced space branch and bound scheme, the presented algorithm for the SRP can be described as follows.

• Step 0 (Initialization)

Set the convergence precision ϵ ≥ 0, the iteration counter k = 0 and the partition ratio α ∈ (0, 1). Compute the values $l_i^{0},u_i^{0}$ for each i = 1, 2, ⋯, p, then determine the optimal solution (x0, t0) and the optimal value fmin of the linear relaxation programming problem REPD0. Let

$\underline{f}=f_{min},\qquad t_i^{\ast}=\frac{\varphi_i(x^{0})}{\psi_i(x^{0})},\ i=1,2,\cdots,p,\qquad x^{\ast}=x^{0};$

clearly, (x*, t*) is a feasible solution of EPD0. Let $\overline{f}=g(t^{\ast})=\sum_{i=1}^{p}\delta_i t_i^{\ast}$; if $\overline{f}-\underline{f}\le\epsilon$, stop: x* and $\overline{f}$ are the optimal solution and the optimal value of the SRP, respectively. Otherwise, set k = 1, D1 = D0, let the set of all partitions still of interest be Θk = {D1}, and go to Step 1.

• Step 1 (Condensing)

For the rectangle Dk ∈ Θk, cut away the invalid part by the condensing technique described in Section 3.2, and replace Dk with the remaining partition.

• Step 2 (Branching)

Subdivide the region Dk into two new regions Dk,1 and Dk,2 according to the α-ratio partition rule, and denote the collection of new partitions by D̄k.

• Step 3 (Bounding)

Obtain the optimal solutions (xkν, tkν) and the optimal values $f_{min}^{k\nu}$, ν ∈ {1, 2}, by solving the problems REPDk,ν. Then let

$\overline{t}_i^{\ast\nu}=\frac{\varphi_i(x^{k\nu})}{\psi_i(x^{k\nu})},\quad i=1,2,\cdots,p,\ \nu\in\{1,2\},$

and update the upper bound by setting $\overline{f}=\min\{\overline{f},g(\overline{t}^{\ast\nu})\}$, letting x* be the feasible solution with the best objective value known so far. If $f_{min}^{k\nu}>\overline{f}$, delete the node associated with Dk,ν from Θk; if Θk = ∅, stop: x* and $\overline{f}$ are the optimal solution and the optimal value of the SRP, respectively. Otherwise, update the lower bound by setting $\underline{f}=\min\{f_{min}^{k\nu}\mid D^{k\nu}\in\Theta^{k}\}$.

• Step 4 (Optimality test)

If $\overline{f}-\underline{f}\le\epsilon$, the algorithm stops, and x* and $\overline{f}$ are an ϵ-global minimizer and the corresponding ϵ-global minimum of the SRP, respectively. Otherwise, select the partition in Θk with the smallest lower bound as the new Dk, let k = k + 1 and return to Step 1.
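The overall scheme (relax → bound → repair a feasible point → prune → branch) can be exercised end-to-end on a miniature instance. The sketch below is not the paper's REP (which requires a general LP solver over F × D); instead it minimizes g(t) = t1 + t2 over the nonconvex set {t1 t2 ≥ 1, t ∈ [0.1, 10]²}, using one McCormick overestimator of t1t2 as the linear relaxation, a tiny vertex-enumeration LP for two variables, and feasibility repair for upper bounds. All names are ours; the global optimum is t = (1, 1) with value 2 by the AM-GM inequality.

```python
# Toy outcome-space branch and bound: min t1+t2 s.t. t1*t2 >= 1, t in [0.1,10]^2.
# Relaxation: t1*t2 <= u2*t1 + l1*t2 - l1*u2 on the box, so t1*t2 >= 1 is
# relaxed to the single linear cut a*t1 + b*t2 >= c with a=u2, b=l1, c=1+l1*u2.

def lp2d(l1, u1, l2, u2, a, b, c):
    """Minimize t1 + t2 over the box intersected with a*t1 + b*t2 >= c by
    enumerating the vertices of the feasible polygon (a, b > 0 here)."""
    cands = [(t1, t2) for t1 in (l1, u1) for t2 in (l2, u2)]
    for t1 in (l1, u1):                       # cut crosses vertical edges
        t2 = (c - a * t1) / b
        if l2 <= t2 <= u2:
            cands.append((t1, t2))
    for t2 in (l2, u2):                       # cut crosses horizontal edges
        t1 = (c - b * t2) / a
        if l1 <= t1 <= u1:
            cands.append((t1, t2))
    feas = [p for p in cands if a * p[0] + b * p[1] >= c - 1e-9]
    return min(feas, key=lambda p: p[0] + p[1]) if feas else None

best, tol = float("inf"), 1e-5
nodes = [(0.1, 10.0, 0.1, 10.0)]
while nodes:
    l1, u1, l2, u2 = nodes.pop()
    sol = lp2d(l1, u1, l2, u2, u2, l1, 1.0 + l1 * u2)   # relaxed node LP
    if sol is None:
        continue                                        # relaxation infeasible
    lb = sol[0] + sol[1]
    if lb >= best - tol:
        continue                                        # bound: prune node
    t1, t2 = sol
    best = min(best, t1 + max(t2, 1.0 / t1))            # repair -> upper bound
    if u1 - l1 >= u2 - l2:                              # branch on widest edge
        m = 0.5 * (l1 + u1)
        nodes += [(l1, m, l2, u2), (m, u1, l2, u2)]
    else:
        m = 0.5 * (l2 + u2)
        nodes += [(l1, u1, l2, m), (l1, u1, m, u2)]

print(best)   # converges to about 2.0 (optimum at t = (1, 1))
```

The same skeleton carries over to the algorithm of this paper once `lp2d` is replaced by an LP solver for REPD and the repair step by $t_i=\varphi_i(x)/\psi_i(x)$ at the relaxation solution.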

## 4.2 Convergence analysis

In this section, we illustrate the convergence property of the algorithm by the following theorem.

#### Theorem 4.1

The reduced space branch and bound algorithm described above either terminates within finitely many iterations and yields an ϵ-global solution of the SRP, or generates an infinite sequence of feasible solutions whose accumulation points are global solutions of the SRP.

#### Proof

If the algorithm terminates at the k-th iteration, then by the termination criterion $\overline{f}-\underline{f}\le\epsilon$. From Step 0 and Step 3 of the algorithm, a feasible solution xk has been found that satisfies $f(x^{k})-\underline{f}\le\epsilon$. At the same time, we have $\underline{f}\le f_{opt}\le f(x^{k})$, where fopt is the optimal value of EPD0. Putting these relations together,

$f_{opt}\le f(x^{k})\le \underline{f}+\epsilon\le f_{opt}+\epsilon,$

so we can conclude that xk is an ϵ-global optimal solution of EPD0, and hence also of the SRP.

If the algorithm does not terminate, then by solving the problems REPDk it generates an infinite feasible solution sequence {(xk, t̄k)}. Let $\overline{t}_i^{k}=\frac{\varphi_i(x^{k})}{\psi_i(x^{k})}$; then {(xk, t̄k)} is a sequence of feasible solutions of EPD0. Since the sequence {xk} is bounded, it has accumulation points, and we may assume $\lim_{k\to\infty}x^{k}=x^{\ast}$ without loss of generality. On the other hand, we get

$\lim_{k\to\infty}\overline{t}_i^{k}=\lim_{k\to\infty}\frac{\varphi_i(x^{k})}{\psi_i(x^{k})}=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})},$(6)

by the continuity of ϕi(x) and ψi(x).

Also, according to the branching regulation described before, we know that

$\lim_{k\to\infty}l_i^{k}=\lim_{k\to\infty}u_i^{k}=t_i^{\ast};$(7)

what is more, note that $l_i^{k}\le \frac{\varphi_i(x^{k})}{\psi_i(x^{k})}\le u_i^{k}$, so we conclude that

$t_i^{\ast}=\lim_{k\to\infty}\frac{\varphi_i(x^{k})}{\psi_i(x^{k})}=\frac{\varphi_i(x^{\ast})}{\psi_i(x^{\ast})}=\lim_{k\to\infty}\overline{t}_i^{k}$

by (6) and (7); therefore (x*, t*) is also a feasible solution of EPD0. Furthermore, since the lower bound sequence $\underline{f}^{k}$ is increasing and bounded above by the optimal value fopt, combining this with the continuity of g(t), we have

$\lim_{k\to\infty}\underline{f}^{k}=g(t^{\ast})\le f_{opt}\le \lim_{k\to\infty}g(\overline{t}^{k})=g(t^{\ast}).$

That is, (x*, t*) is an optimal solution of EPD0, and of course x* is an optimal solution of the SRP by the equivalence of problems SRP and EPD0, which completes the proof. □

## 5 Numerical experiments

To test the efficiency and solution quality of the proposed algorithm, we performed computational experiments on a personal computer with an Intel Core i5 2.40 GHz processor and 4 GB of RAM. The code is written in Matlab 2014a and calls LINPROG for the linear relaxation subproblems and CVX for the convex relaxation subproblems.

We consider numerical examples from the recent literature [14, 20, 21, 22, 23, 24, 25, 26] and randomly generated test problems to verify the performance of the algorithm. The test problems and results are listed below.

#### Example 5.1

([23]).

$\max\ \frac{-x_1+2x_2+2}{3x_1-4x_2+5}+\frac{4x_1-3x_2+4}{-2x_1+x_2+3}\quad \mathrm{s.t.}\ x_1+x_2\le 1.5,\ x_1-x_2\le 0,\ 0\le x_1\le 1,\ 0\le x_2\le 1.$

#### Example 5.2

([14, 22, 23]).

$\max\ \frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_2+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\quad \mathrm{s.t.}\ 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+3x_3\le 10,\ 5x_1+9x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.$

#### Example 5.3

([14]).

$\max\ 0.9\times\frac{-x_1+2x_2+2}{3x_1-4x_2+5}-0.1\times\frac{4x_1-3x_2+4}{-2x_1+x_2+3}\quad \mathrm{s.t.}\ x_1+x_2\le 1.5,\ x_1-x_2\le 0,\ 0\le x_1\le 1,\ 0\le x_2\le 1.$

#### Example 5.4

([21]).

$\max\ \frac{3x_1+4x_2+50}{3x_1+5x_2+4x_3+50}-\frac{3x_1+5x_2+3x_3+50}{5x_1+5x_2+4x_3+50}-\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}-\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}\quad \mathrm{s.t.}\ 6x_1+3x_2+3x_3\le 10,\ 10x_1+3x_2+8x_3\le 10,\ x_1,x_2,x_3\ge 0.$

#### Example 5.5

([14, 24]).

$\max\ \frac{37x_1+73x_2+13}{13x_1+13x_2+13}+\frac{63x_1-18x_2+39}{13x_1+26x_2+13}\quad \mathrm{s.t.}\ 5x_1-3x_2=3,\ 1.5\le x_1\le 3.$

#### Example 5.6

([20, 23]).

$\max\ \frac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\frac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\frac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50}\quad \mathrm{s.t.}\ 6x_1+3x_2+3x_3\le 10,\ 10x_1+3x_2+8x_3\le 10,\ x_1,x_2,x_3\ge 0.$

#### Example 5.7

([14, 24]).

$\max\ \sum_{i=1}^{5}\frac{c^{i}x+r_i}{d^{i}x+s_i}\quad \mathrm{s.t.}\ Ax\le b,\ x\ge 0,$

where

$b=(15.7,31.8,-36.4,38.5,40.3,10.0,89.8,5.8,2.7,-16.3,-14.6,-72.7,57.7,-34.5,69.1)^{T},$

$A=(−1.8−2.20.84.13.8−2.3−0.82.5−1.60.2−4.5−1.84.6−2.01.43.2−4.2−3.31.90.70.8−4.44.42.03.7−2.8−3.2−2.0−3.7−3.33.5−0.71.5−3.14.5−1.1−0.6−0.6−2.54.10.63.32.8−0.14.1−3.2−1.2−4.31.8−1.6−4.5−1.34.63.34.2−1.21.92.43.4−2.9−0.5−4.11.73.9−0.1−3.9−1.51.62.3−2.3−3.23.90.31.71.34.70.93.9−0.5−1.23.80.6−0.2−1.50.5−4.23.6−0.6−4.81.5−0.30.6−3.60.23.8−2.80.13.3−4.32.44.11.71.0−3.34.4−3.7−1.1−1.4−0.62.22.51.3−4.3−2.9−4.12.7−0.8−2.93.51.24.31.9−4.0−2.61.82.50.61.3−4.3−2.34.1−1.10.00.4−4.5−4.41.2−3.8−1.91.23.0−1.1−0.22.5−0.1−1.72.91.54.7−0.34.2−4.4−3.94.44.7−1.0−3.81.4−4.71.93.83.51.52.3−3.7−4.22.7−0.10.2−0.14.9−0.90.14.31.62.61.5−1.00.81.6)$

$c^{1}=(0.0,-0.1,-0.3,0.3,0.5,0.5,-0.8,0.4,-0.4,0.2,0.2,-0.1),\quad r_1=14.6,$
$c^{2}=(0.2,0.5,0.0,0.4,0.1,-0.6,-0.1,-0.2,-0.2,0.1,0.2,0.3),\quad r_2=7.1,$
$c^{3}=(-0.1,0.3,0.0,0.1,-0.1,0.0,0.3,-0.2,0.0,0.3,0.5,0.3),\quad r_3=1.7,$
$c^{4}=(-0.1,0.5,0.1,0.1,-0.2,-0.5,0.6,0.7,0.5,0.7,-0.1,0.1),\quad r_4=4.0,$
$c^{5}=(0.7,-0.5,0.1,0.2,-0.1,-0.3,0.0,-0.1,-0.2,0.6,0.5,-0.2),\quad r_5=6.8,$
$d^{1}=(-0.3,-0.1,-0.1,-0.1,0.1,0.4,0.2,-0.2,0.4,0.2,-0.4,0.3),\quad s_1=14.2,$
$d^{2}=(0.0,0.1,-0.1,0.3,0.3,-0.2,0.3,0.0,-0.4,0.5,-0.3,0.1),\quad s_2=1.7,$
$d^{3}=(0.8,-0.4,0.7,-0.4,-0.4,0.5,-0.2,-0.8,0.5,0.6,-0.2,0.6),\quad s_3=8.1,$
$d^{4}=(0.0,0.6,-0.3,0.3,0.0,0.2,0.3,-0.6,-0.2,-0.5,0.8,-0.5),\quad s_4=26.9,$
$d^{5}=(0.4,0.2,-0.2,0.9,0.5,-0.1,0.3,-0.8,-0.2,0.6,-0.2,-0.4),\quad s_5=3.7.$

#### Example 5.8

([25]).

$\min\ x_1+x_2+x_3\quad \mathrm{s.t.}\ \frac{833.33252x_4}{x_1x_6}+\frac{100}{x_6}\le 1,\quad \frac{1250x_5-1250x_4}{x_2x_7}+\frac{x_4}{x_7}\le 1,\quad \frac{1250000-2500x_5}{x_3x_8}+\frac{x_5}{x_8}\le 1,\quad 0.0025x_4+0.0025x_6\le 1,\quad -0.0025x_4+0.0025x_5+0.0025x_7\le 1,\quad 0.01x_8-0.01x_5\le 1,\quad 100\le x_1\le 10000,\ 1000\le x_2,x_3\le 10000,\ 10\le x_i\le 1000,\ i=4,5,\dots,8.$

#### Example 5.9

([26]).

${min0.5(x1−10)x2−x1s.t.x2x3+x1+0.5x1x3≤1001≤xi≤100,i=1,2,3.$

#### Example 5.10

(Random test).

$\max\ \sum_{i=1}^{p}\delta_i\frac{(n^{i})^{T}x+\beta_i}{(d^{i})^{T}x+\gamma_i}\quad \mathrm{s.t.}\ Ax\le b,\ x\ge 0,$

where the elements of the matrix A ∈ Rm×n, b ∈ Rm, ni, di ∈ Rn and the constant terms βi, γi ∈ R of the numerators and denominators are randomly generated in the interval [0, 1]; this agrees with the way random instances are generated in [14]. In our experiments, however, δi ∈ R is randomly generated in the interval [−1, 1] rather than [0, 1], which is much more challenging for the algorithm. The results of the comparison experiments and the random tests are shown in Tables 1-3, with the following notation: p, m and n denote the number of affine ratios in the objective function, the number of constraints and the number of variables, respectively; Ave.time, Ave.Nod and Ave.Ite stand for the average CPU time in seconds, the average number of subproblems and the average number of iterations; ϵ is the error precision used in the algorithm, and α is the split ratio used in the branching operations.

Table 1

Results of the numerical contrast test 1-7.

Table 2

Computational results of random test 8 corresponding to the variation of the number of variable n.

Table 3

Computational results of random test 8 corresponding to the variation of the number of ratio p.

The optimal solutions reported in the literature and by our algorithm are:

$x^{(18)}=(6.24409,20.0249,3.79672,5.93972,0,7.43852,0,23.2833,0.515015,40.9896,0,3.14363)^{T},$
$x^{(22)}=(6.223689,20.060317,3.774684,5.947841,0,7.456686,0,23.312579,0.000204,41.031824,0,3.171106)^{T},$
$x^{(23)}=(578.973143,1359.572730,5110.701048,181.9898,295.5719,218.0101,286.4179,395.5719),$
$x^{(24)}=(87.614446,8.754375,1.413643,19.311410),$
$x^{(our1)}=(6.22442,20.05821,3.77441,5.94859,0.00001,7.45691,0.00002,23.31133,0.00012,41.03002,0.00001,3.17225)^{T},$
$x^{(our2)}=(579.326059,1359.9445,5109.977472,182.019317,295.600901,217.980682,286.418416,395.600901),$
$x^{(our3)}=(87.614446,8.754375,1.413643,19.311410).$

The computational results in Table 2 and Table 3 indicate that our algorithm performs well and is effective for relatively large-scale optimization problems in which the number of ratios in the objective function is not too large. Meanwhile, we find that the average number of iterations and subproblems solved by the algorithm, as well as the average CPU time, do not increase substantially as the size of the problem grows. Based on the above numerical results, our algorithm is quite robust and efficient, and it can be used successfully to solve the sum of affine ratios problem SRP.

## 6 Concluding remarks

In this paper, a new branch and bound optimization algorithm is presented for globally solving a class of sum of ratios problems. The algorithm proceeds in three steps. First, the original problem is reformulated into an equivalent problem coupled with an outcome space; then the convex relaxation programming problem is established by utilizing the lower and upper bounds of the auxiliary variables. Finally, a new condensing operation based on the lower bound of the optimal value is presented for cutting away the whole or a part of the investigated region that contains no global optimal solution of the equivalent problem. By combining the adapted partition rule and the accelerating technique with the reduced space branch and bound scheme, the presented algorithm is obtained. Numerical results show that the proposed algorithm suppresses the rapid growth of the branching tree during the search process, and several random examples illustrate its high efficiency and stability.

## Acknowledgement

This work was supported by the Science and Technology Key Projects of the Education Department of Henan Province (14A110024, 15A110023), the Natural Science Foundation of Henan Province (152300410097), the Science and Technology Project of Henan Province (182102310941), the Cultivation Plan of Young Key Teachers in Colleges and Universities of Henan Province (2016GGJS-107), the Higher School Key Scientific Research Projects of Henan Province (18A110019, 17A110021), and the Major Scientific Research Project of Henan Institute of Science and Technology (2015ZD07). The authors also thank the authors of all the cited references.

## References

• [1]

• [2]

Charnes A., Cooper W.W., Programming with linear fractional functionals, Nav. Res. Log. Q., 1962, 9, 181-186.

• [3]

Horst R., Pardalos P.M., Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, 1995, 495-608.

• [4]

Jiao H.W., Liu S.Y., Range division and compression algorithm for quadratically constrained sum of quadratic ratios, Comput. Appl. Math., 2017, 36(1), 225-247.

• [5]

Jiao H.W., Liu S.Y., Yin J., Zhao Y., Outcome space range reduction method for global optimization of sum of affine ratios problem, Open Math., 2016, 14, 736-746.

• [6]

Rao M.R., Cluster analysis and mathematical programming, J. Am. Stat. Assoc., 1971, 66, 622-626.

• [7]

Falk J.E., Palocsay S.W., Optimizing the sum of linear fractional functions, Recent advances in global optimization, Princeton University Press, Princeton, New Jersey, 1992.

• [8]

Almogy Y., Levin O., Parametric analysis of a multi-stage stochastic shipping problem, Proc. of the fifth IFORS Conf., 1964, 359-370.

• [9]

Konno H., Watanabe H., Bond portfolio optimization problems and their applications to index tracking, J. Oper. Res. Soc. Jpn., 1996, 39, 295-306.

• [10]

Majihi J., Janardan R., Smid M., Gupta P., On some geometric optimization problems in layered manufacturing, Comp. Geom., 1999, 12, 219-239.

• [11]

Schwerdt J., Smid M., Janardan R., Johnson E., Majihi J., Protecting critical facets in layered manufacturing, Comp. Geom., 2000, 16, 187-210.

• [12]

Schaible S., Shi J., Fractional programming: the sum-of-ratios case, Optim. Method Softw., 2003, 18, 219-229.

• [13]

Stancu-Minasian I.M., A sixth bibliography of fractional programming, Optimization, 2006, 55, 405-428.

• [14]

Jiao H.W., Liu S.Y., A practicable branch and bound algorithm for sum of linear ratios problem, Eur. J. Oper. Res., 2015, 243, 723-730.

• [15]

Karmarkar N., A new polynomial-time algorithm for linear programming, Combinatorica, 1984, 4, 373-395.

• [16]

Cambini A., Martein L., Schaible S., On maximizing a sum of ratios, J. Inf. Optim. Sci., 1989, 10, 65-79.

• [17]

Konno H., Abe N., Minimization of the sum of three linear fractional functions, J. Global Optim., 1999, 15, 419-432.

• [18]

Konno H., Yajima Y., Matsui T., Parametric simplex algorithm for solving a special class of nonconvex minimization problems, J. Global Optim., 1991, 1, 65-81.

• [19]

Falk J.E., Palocsay S.W., Image space analysis of generalized fractional programs, J. Global Optim., 1994, 4, 63-88.

• [20]

Pei Y.G., Zhu D.T., Global optimization method for maximizing the sum of difference of convex functions ratios over nonconvex region, J. Appl. Math. Comput., 2013, 41, 153-169.

• [21]

Shen P.P., Wang C.F., Global optimization for sum of linear ratios problem with coefficients, Appl. Math. Comp., 2006, 176, 219-229.

• [22]

Wang Y.J., Shen P.P., Liang Z.A., A branch-and-bound algorithm to globally solve the sum of several linear ratios, Appl. Math. Comp., 2005, 168, 89-101.

• [23]

Jiao H.W., Liu S.Y., An efficient algorithm for quadratic sum-of-ratios fractional programs problem, Numer. Funct. Anal. Optim., 2017, 38(11), 1426-1445.

• [24]

Phuong N.T.H., Tuy H., A unified monotonic approach to generalized linear fractional programming, J. Global Optim., 2003, 26, 229-259.

• [25]

Lin M.H., Tsai J.F., Range reduction techniques for improving computational efficiency in global optimization of signomial geometric programming problems, Eur. J. Oper. Res., 2012, 216(1), 17-25.

• [26]

Dembo R.S., Avriel M., Optimal design of a membrane separation process using signomial programming, Math. Prog., 1978, 15(1), 12-25.

Accepted: 2018-04-04

Published Online: 2018-05-30

Competing interests: The authors declare that there is no conflict of interest regarding the publication of this paper.

Author’s contributions: Both authors contributed equally to the manuscript, and they read and approved the final manuscript.

Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 539–552, ISSN (Online) 2391-5455.