
# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo


# Outcome space range reduction method for global optimization of sum of affine ratios problem

Hongwei Jiao (corresponding author) / Sanyang Liu / Jingben Yin / Yingfeng Zhao

School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China

Published Online: 2016-10-06 | DOI: https://doi.org/10.1515/math-2016-0058

## Abstract

Many algorithms for globally solving the sum of affine ratios problem (SAR) are based on an equivalent problem and a branch-and-bound framework, but the exhaustiveness of the branching rule leads to a significant increase in the computational burden of solving the equivalent problem. In this study, a new range reduction method for the outcome space of the denominators is presented for globally solving the sum of affine ratios problem (SAR). The proposed range reduction method makes it possible to delete a large part of the outcome space region of the denominators in which the global optimal solution of the equivalent problem does not exist, and it can be seen as an accelerating device for global optimization of the (SAR). Several numerical examples are presented to demonstrate the advantages of the proposed algorithm, which uses the new range reduction method, in terms of both computational efficiency and solution quality.

MSC 2010: 90C26; 90C32

## 1 Introduction

In this article, we shall investigate the following sum of affine ratios problem:

$$(\mathrm{SAR}):\quad\begin{aligned}\max\ &\varphi(x)=\sum_{i=1}^{p}\frac{f_i(x)}{g_i(x)}\\ \text{s.t.}\ &x\in\Lambda=\{x\in R^n\mid Ax\le b,\ x\ge 0\},\end{aligned}$$

where A is an m × n matrix; b is an m-dimensional vector; fi(x) and gi(x) are all affine functions; Λ is a nonempty compact set; and each denominator gi(x) ≠ 0.

The sum of affine ratios problem (SAR) has attracted the interest of researchers and practitioners for many years. From a practical point of view, the problem (SAR) has a wide range of applications, such as transportation design [1, 2], production planning [3], finance and investment [1, 2], etc. In these applications, p is usually less than four or five. From a research point of view, the problem (SAR) poses significant theoretical and computational challenges, mainly because it is a global optimization problem: it is well known to generally possess multiple local optimal solutions that are not globally optimal.

During the past several decades, under the assumption that fi(x) ≥ 0 and gi(x) > 0 for any x ∈ Λ, several algorithms have been proposed for solving the sum of affine ratios problem (SAR), for example, simplex and parametric simplex methods [6, 7], an image space approach [8], branch-and-bound methods [9–15], a trapezoidal branch-and-bound algorithm [16], a monotonic optimization method [17], and so on. Recently, by utilizing a three-level linear relaxation method, Jiao et al. [18] presented a global optimization algorithm for the sum of generalized polynomial ratios problem; based on a simplicial branch-and-bound framework, Pei et al. [19] proposed a global optimization algorithm for solving the sum of D.C. ratios problem; using new accelerating techniques, Jiao and Liu [20, 21] proposed two branch-and-bound algorithms for the sum of affine ratios and the sum of quadratic ratios problems, respectively; Jiao et al. [22] constructed a new linearizing technique for globally solving the generalized linear multiplicative problem. Although these methods can all be used to solve special cases of the sum of affine ratios problem (SAR), little work has been done on globally solving the general sum of affine ratios problem investigated in this article.

The main purpose of this paper is to present a new range reduction method for an outcome space branch-and-bound algorithm for the sum of affine ratios problem (SAR). By making full use of the objective function of the equivalent problem and the currently known lower bound, a new outcome space range reduction method is constructed. It provides a theoretical possibility to delete a large part of the investigated outcome space region of the denominators in which the global optimal solution of the equivalent problem does not exist, and it can be used as an accelerating technique to enhance the computational speed of the proposed outcome space algorithm for the (SAR). Numerical results are given to demonstrate that the computational efficiency of the proposed outcome space algorithm is obviously enhanced by this new range reduction method.

This paper is organized as follows. In Section 2, a new range reduction method based on the outcome space region of the denominators is introduced. In Section 3, by combining the outcome space range reduction method with a branch-and-bound scheme, a new global optimization algorithm is described and its convergence is established. Section 4 presents some common test examples and the numerical results obtained. Finally, concluding remarks are given.

## 2 Outcome space range reduction method

In the following, we focus on generating a range reduction method for reducing the investigated outcome space region of the denominators that does not contain the global optimal solution of the problem (SAR), and on using this method as a tool for accelerating the computational speed of the proposed outcome space algorithm for the problem (SAR).

By the assumption that the denominator gi(x) ≠ 0 for all x ∈ Λ, and the continuity of the fractional function $f_i(x)/g_i(x)$, we get that either gi(x) > 0 on Λ or gi(x) < 0 on Λ. Therefore, without loss of generality, we can always assume that gi(x) > 0 for i = 1, 2, ..., T and gi(x) < 0 for i = T + 1, T + 2, ..., p. Besides, if fi(x) < 0 for some i ∈ {1, 2, ..., p}, then by using the technique proposed in [20], fi(x) ≥ 0 can always be ensured. Thus, without loss of generality, we can always suppose that fi(x) ≥ 0, and set

$$L_i^0=\min_{x\in\Lambda}g_i(x),\qquad U_i^0=\max_{x\in\Lambda}g_i(x),\qquad i=1,2,\dots,p,$$

and denote the initial rectangle

$$Y^0=\left\{y\in R^p\mid L_i^0\le y_i\le U_i^0,\ i=1,2,\dots,p\right\},$$

then the problem (SAR) can be transformed into the equivalent problem (Q) as follows.

$$Q(Y^0):\quad\begin{aligned}\max\ &H_0(x,y)=\sum_{i=1}^{p}\frac{f_i(x)}{y_i}\\ \text{s.t.}\ &H_i(x,y)=y_i-g_i(x)\ge 0,\quad i=1,2,\dots,T,\\ &H_i(x,y)=y_i-g_i(x)\le 0,\quad i=T+1,T+2,\dots,p,\\ &x\in\Lambda,\ y\in Y^0.\end{aligned}$$

The key equivalence results for the problem (SAR) and the Q(Y0) are discussed in the following theorem.

Theorem 2.1. If (x*, y*) is a global optimum point of the problem Q(Y0), then x* is also a global optimum point of the problem (SAR). Conversely, if x* is a global optimum point of the problem (SAR), then (x*, y*) is a global optimum point of the problem Q(Y0), where $y_i^*=g_i(x^*)$, i = 1, 2, ..., p.

Proof. The conclusion is obvious; therefore, it is omitted.□

By Theorem 2.1, in order to globally solve the problem (SAR), we may globally solve the problem Q(Y0) instead.
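For illustration, the bounds $L_i^0,U_i^0$ defining $Y^0$ are the optimal values of $2p$ linear programs over Λ. The sketch below (ours, not from the paper) computes them in closed form for a toy feasible set that is itself a box, where an affine function attains its extrema coordinatewise; over a general polytope Λ each bound would instead require one LP solve.

```python
def affine_range(coef, const, xlo, xhi):
    """Min and max of sum_j coef[j]*x_j + const over the box
    xlo <= x <= xhi: each term c*x_j is monotone in x_j, so the
    extrema are read off from the sign of each coefficient."""
    lo = const + sum(c * (xlo[j] if c > 0 else xhi[j]) for j, c in enumerate(coef))
    hi = const + sum(c * (xhi[j] if c > 0 else xlo[j]) for j, c in enumerate(coef))
    return lo, hi

# toy denominator g(x) = 2*x1 - x2 + 3 over the box [0,1] x [0,2]
L0, U0 = affine_range([2.0, -1.0], 3.0, [0.0, 0.0], [1.0, 2.0])
```

Here [L0, U0] = [1, 5] would be one edge of the initial rectangle $Y^0$.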

For each rectangle ${Y}^{k}=\{y\in R^{p}\mid L_i^k\le y_i\le U_i^k,\ i=1,2,\dots,p\}\subseteq Y^{0}$, the notation and functions used in this paper are introduced as follows:

$$f_i(x)=\sum_{j=1}^{n}c_{ij}x_j+e_i;$$
$$H_0^U(x)=\sum_{i=1}^{p}\Bigg(\sum_{\substack{j=1\\ c_{ij}>0}}^{n}\frac{c_{ij}}{L_i^k}x_j+\sum_{\substack{j=1\\ c_{ij}<0}}^{n}\frac{c_{ij}}{U_i^k}x_j\Bigg)+\sum_{\substack{i=1\\ e_i>0}}^{p}\frac{e_i}{L_i^k}+\sum_{\substack{i=1\\ e_i<0}}^{p}\frac{e_i}{U_i^k};$$
$$H_0^L(x)=\sum_{i=1}^{p}\Bigg(\sum_{\substack{j=1\\ c_{ij}>0}}^{n}\frac{c_{ij}}{U_i^k}x_j+\sum_{\substack{j=1\\ c_{ij}<0}}^{n}\frac{c_{ij}}{L_i^k}x_j\Bigg)+\sum_{\substack{i=1\\ e_i>0}}^{p}\frac{e_i}{U_i^k}+\sum_{\substack{i=1\\ e_i<0}}^{p}\frac{e_i}{L_i^k};$$
$$H_i^U(x)=U_i^k-g_i(x),\quad H_i^L(x)=L_i^k-g_i(x),\quad i=1,2,\dots,T;$$
$$H_i^L(x)=L_i^k-g_i(x),\quad H_i^U(x)=U_i^k-g_i(x),\quad i=T+1,T+2,\dots,p.$$

Theorem 2.2. Consider the functions $H_i^U(x)$, $H_i^L(x)$, $H_i(x,y)$ for any x ∈ Λ, y ∈ Yk ⊆ Y0, where i = 0, 1, 2, ..., p. Then the following two statements are valid.

(i) $H_i^U(x)\ge H_i(x,y)\ge H_i^L(x),\quad i=0,1,2,\dots,p.$

(ii) $\lim_{\|U^k-L^k\|\to 0}\left[H_i^U(x)-H_i(x,y)\right]=\lim_{\|U^k-L^k\|\to 0}\left[H_i(x,y)-H_i^L(x)\right]=0.$

Proof. The proof is straightforward and is omitted here.□

Based on Theorem 2.2, for any Yk ⊆ Y0, we can construct the corresponding linear relaxation programming problem LRP(Yk) of Q(Yk) as follows, which provides a reliable upper bound for the global optimal value of the problem Q(Yk).

$$LRP(Y^k):\quad\begin{aligned}\max\ &H_0^U(x)=\sum_{i=1}^{p}\Bigg(\sum_{\substack{j=1\\ c_{ij}>0}}^{n}\frac{c_{ij}}{L_i^k}x_j+\sum_{\substack{j=1\\ c_{ij}<0}}^{n}\frac{c_{ij}}{U_i^k}x_j\Bigg)+\sum_{\substack{i=1\\ e_i>0}}^{p}\frac{e_i}{L_i^k}+\sum_{\substack{i=1\\ e_i<0}}^{p}\frac{e_i}{U_i^k}\\ \text{s.t.}\ &H_i^U(x)=U_i^k-g_i(x)\ge 0,\quad i=1,2,\dots,T,\\ &H_i^L(x)=L_i^k-g_i(x)\le 0,\quad i=T+1,T+2,\dots,p,\\ &x\in\Lambda.\end{aligned}$$
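The objective of LRP(Yk) is obtained from H0 purely by the sign rule above: since x ≥ 0 on Λ, each coefficient $c_{ij}$ is divided by $L_i^k$ when positive and by $U_i^k$ when negative, and likewise for $e_i$. A minimal sketch of this coefficient construction (our own function and variable names, illustrative only):

```python
def lrp_objective_coeffs(c, e, L, U):
    """Coefficients (a, b) of H_0^U(x) = sum_j a[j]*x_j + b, assuming
    x_j >= 0 on the feasible set: a positive term c[i][j]*x_j is
    divided by L[i], a negative one by U[i]; same rule for e[i]."""
    p, n = len(c), len(c[0])
    a = [0.0] * n
    b = 0.0
    for i in range(p):
        for j in range(n):
            a[j] += c[i][j] / (L[i] if c[i][j] > 0 else U[i])
        b += e[i] / (L[i] if e[i] > 0 else U[i])
    return a, b
```

For a single ratio with numerator $2x_1-3x_2+5$ and $y_1\in[1,4]$, this gives the relaxed objective $2x_1-0.75x_2+5$.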

For any rectangle Yk ⊆ Y0 generated during the iterations of the algorithm and still of interest, we want to recognize whether or not Yk contains a global optimal solution of Q(Y0). The proposed new outcome space range reduction method aims at replacing the rectangle Yk = [Lk, Uk] with a smaller rectangle $\ddot{Y}=[\ddot{L},\ddot{U}]$ without deleting any global optimal solution of Q(Y0).

In the following, suppose without loss of generality that $\underline{LB}$ is the currently known lower bound of the global optimal value of Q(Y0) at iteration k, and that v(Yk) is the maximum value of H0(x, y) over y ∈ Yk and x ∈ Λ, and set

$$l_i^0=\min_{x\in\Lambda}f_i(x),\qquad u_i^0=\max_{x\in\Lambda}f_i(x),\qquad i=1,2,\dots,p,$$
$$\overline{UB}^k=\sum_{i=1}^{T}\frac{u_i^0}{L_i^k}+\sum_{i=T+1}^{p}\frac{l_i^0}{L_i^k},\qquad \beta_i^k=\begin{cases}\dfrac{u_i^0}{\underline{LB}-\overline{UB}^k+u_i^0/L_i^k}, & i=1,2,\dots,T,\\[2ex]\dfrac{l_i^0}{\underline{LB}-\overline{UB}^k+l_i^0/L_i^k}, & i=T+1,T+2,\dots,p.\end{cases}$$

Theorem 2.3. For any sub-rectangle $Y^k=\left(Y_i^k\right)_{p\times 1}=\left(\left[L_i^k,U_i^k\right]\right)_{p\times 1}\subseteq Y^0$, we have the following conclusions:

(i) If $\overline{UB}^k<\underline{LB}$, then there exists no global optimal solution of Q(Y0) in Yk.

(ii) If $\overline{UB}^k\ge\underline{LB}$, then, for each s ∈ {1, 2, ..., p}, there exists no global optimal solution of Q(Y0) in $\underline{Y}^k$, where

$$\underline{Y}^k=\left(\underline{Y}_i^k\right)_{p\times 1}\subseteq Y^0\quad\text{with}\quad \underline{Y}_i^k=\begin{cases}Y_i^k, & i\ne s,\ i=1,2,\dots,p,\\ \left(\beta_i^k,U_i^k\right]\cap Y_i^k, & i=s\in\{1,2,\dots,p\}.\end{cases}$$

Proof. (i) If $\overline{UB}^k<\underline{LB}$, then we have

$$v(Y^k)=\max_{y\in Y^k,\ x\in\Lambda}\sum_{i=1}^{p}\frac{f_i(x)}{y_i}\le\sum_{i=1}^{T}\frac{u_i^0}{L_i^k}+\sum_{i=T+1}^{p}\frac{l_i^0}{L_i^k}=\overline{UB}^k<\underline{LB};$$

therefore, there exists no global optimal solution of the problem Q(Y0) in Yk.

(ii) If ${\overline{UB}}^{k}\ge \underset{_}{LB}$, then we can get the following results.

For any s ∈ {1, 2, ..., T}, for all x ∈ Λ and $y\in\underline{Y}^k$, since $0\le l_i^0\le f_i(x)\le u_i^0$, i = 1, 2, ..., p; $0\le L_i^k\le y_i\le U_i^k$, i = 1, 2, ..., p, i ≠ s; and $0\le\beta_s^k<y_s\le U_s^k$; it follows that

$$\begin{aligned}\max_{y\in\underline{Y}^k,\ x\in\Lambda}\sum_{i=1}^{p}\frac{f_i(x)}{y_i}&\le\max_{y\in\underline{Y}^k,\ x\in\Lambda}\sum_{\substack{i=1\\ i\ne s}}^{T}\frac{f_i(x)}{y_i}+\max_{y\in\underline{Y}^k,\ x\in\Lambda}\frac{f_s(x)}{y_s}+\max_{y\in\underline{Y}^k,\ x\in\Lambda}\sum_{i=T+1}^{p}\frac{f_i(x)}{y_i}\\ &<\sum_{\substack{i=1\\ i\ne s}}^{T}\frac{u_i^0}{L_i^k}+\frac{u_s^0}{\beta_s^k}+\sum_{i=T+1}^{p}\frac{l_i^0}{L_i^k}=\overline{UB}^k-\frac{u_s^0}{L_s^k}+\frac{u_s^0}{\beta_s^k}=\underline{LB}.\end{aligned}$$

Therefore, there exists no global optimal solution of Q(Y0) in $\underline{Y}^k$.

By the same argument, for any s ∈ {T + 1, T + 2, ..., p}, for all x ∈ Λ and $y\in\underline{Y}^k$, since $0\le l_i^0\le f_i(x)\le u_i^0$, i = 1, 2, ..., p; $0\le L_i^k\le y_i\le U_i^k$, i = 1, 2, ..., p, i ≠ s; and $0\le\beta_s^k<y_s\le U_s^k$; it follows that

$$\begin{aligned}\max_{y\in\underline{Y}^k,\ x\in\Lambda}\sum_{i=1}^{p}\frac{f_i(x)}{y_i}&\le\max_{y\in\underline{Y}^k,\ x\in\Lambda}\sum_{i=1}^{T}\frac{f_i(x)}{y_i}+\max_{y\in\underline{Y}^k,\ x\in\Lambda}\sum_{\substack{i=T+1\\ i\ne s}}^{p}\frac{f_i(x)}{y_i}+\max_{y\in\underline{Y}^k,\ x\in\Lambda}\frac{f_s(x)}{y_s}\\ &<\sum_{i=1}^{T}\frac{u_i^0}{L_i^k}+\sum_{\substack{i=T+1\\ i\ne s}}^{p}\frac{l_i^0}{L_i^k}+\frac{l_s^0}{\beta_s^k}=\overline{UB}^k-\frac{l_s^0}{L_s^k}+\frac{l_s^0}{\beta_s^k}=\underline{LB}.\end{aligned}$$

Therefore, there exists no global optimal solution of Q(Y0) over $\underline{Y}^k$.□

By Theorem 2.3, we can construct the new range reduction method to cut away a part of the outcome space region of the denominators in which no global optimal solution of Q(Y0) exists. Assume that a sub-rectangle

$$Y^k=\left(Y_i^k\right)_{p\times 1}\subseteq Y^0\quad\text{with}\quad Y_i^k=\left[L_i^k,U_i^k\right]$$

will be reduced or deleted, then according to the Theorem 2.3, the checked rectangle Yk should be replaced by the new sub-rectangle

$$\ddot{Y}=\left(\ddot{Y}_i\right)_{p\times 1}\quad\text{with}\quad \ddot{Y}_i=\left[L_i^k,\beta_i^k\right]\cap Y_i^k,\quad i\in\{1,2,\dots,p\}.$$
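In code, the reduction of Theorem 2.3 amounts to computing $\overline{UB}^k$ and $\beta_i^k$ and intersecting each edge with $[L_i^k,\beta_i^k]$. A sketch under the paper's assumptions (function and variable names are ours); when the denominator of $\beta_s^k$ is nonpositive the bound used in the proof is vacuous, so no cut is made for that component:

```python
def reduce_rectangle(L, U, l0, u0, T, LB):
    """Range reduction of Theorem 2.3. l0/u0 bound the numerators,
    T counts the ratios with positive denominator, LB is the current
    lower bound. Returns the shrunken rectangle (L, U) componentwise,
    or None when the whole rectangle is pruned."""
    p = len(L)
    UBbar = sum((u0[i] if i < T else l0[i]) / L[i] for i in range(p))
    if UBbar < LB:
        return None                    # case (i): no global optimum here
    L, U = list(L), list(U)
    for s in range(p):
        num = u0[s] if s < T else l0[s]
        denom = LB - UBbar + num / L[s]
        if denom > 0:                  # beta_s is well defined
            beta = num / denom
            if beta < L[s]:
                return None            # [L_s, beta_s] ∩ Y_s is empty
            U[s] = min(U[s], beta)     # delete (beta_s, U_s]
    return L, U
```

As a sanity check with p = T = 1, l0 = [1], u0 = [4], Y = [1, 10] and LB = 2: any y > 2 gives f(x)/y < 4/2 = 2 = LB, and indeed the interval shrinks to [1, 2]; with LB = 10 the whole rectangle is pruned.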

## 3 Algorithm and its global convergence

In the following, we first describe the branching operation. Then, a new outcome space branch-and-bound algorithm using the new range reduction method is proposed for solving the problem (SAR).

## 3.1 Branching operation

In the following algorithm, the branching process takes place in the outcome space of the denominators. Suppose that Y = {y ∈ Rp | L ≤ y ≤ U} is a sub-rectangle of Y0 which will be partitioned. The proposed branching operation is described as follows. Denote q = arg max{Ui − Li | i = 1, 2, ..., p}; using the rectangle bisection method, we divide Y into the two sub-rectangles
$$\hat{Y}^1=\left\{y\in R^p\ \middle|\ L_q\le y_q\le\frac{L_q+U_q}{2};\ L_i\le y_i\le U_i,\ i=1,2,\dots,p,\ i\ne q\right\}$$
and
$$\hat{Y}^2=\left\{y\in R^p\ \middle|\ \frac{L_q+U_q}{2}\le y_q\le U_q;\ L_i\le y_i\le U_i,\ i=1,2,\dots,p,\ i\ne q\right\}.$$
Obviously, by [23] it follows easily that the proposed branching technique is exhaustive; that is to say, if {Yk} is a nested rectangle subsequence generated by the proposed branching operation, then as k → ∞ there must exist a limit point y* ∈ Rp satisfying $\cap_k Y^k=\{y^*\}$.
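The bisection rule can be sketched directly (an illustrative helper of ours, not the authors' code):

```python
def bisect(L, U):
    """Split the rectangle [L, U] into two halves along its longest
    edge q = argmax_i (U_i - L_i), as in Section 3.1."""
    q = max(range(len(L)), key=lambda i: U[i] - L[i])
    mid = 0.5 * (L[q] + U[q])
    return ((L[:], U[:q] + [mid] + U[q + 1:]),   # lower half in coordinate q
            (L[:q] + [mid] + L[q + 1:], U[:]))   # upper half in coordinate q
```

For example, [0, 4] × [0, 2] splits along its first edge into [0, 2] × [0, 2] and [2, 4] × [0, 2].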

## 3.2 Outcome space branch-and-bound algorithm

Based on the linear relaxation programming problem, the new outcome space range reduction operation, and the branching operation above, an outcome space branch-and-bound algorithm for solving the (SAR) is described as follows.

Algorithm Steps

(1) Initializing step. Let k = 0, let the initial set of active nodes be Υ0 = {Y0}, let the given convergence tolerance be ε ≥ 0, and, for each i = 1, 2, ..., p, calculate
$$l_i^0=\min_{x\in\Lambda}f_i(x),\quad u_i^0=\max_{x\in\Lambda}f_i(x),\quad L_i^0=\min_{x\in\Lambda}g_i(x)\quad\text{and}\quad U_i^0=\max_{x\in\Lambda}g_i(x).$$

Solve the LRP(Y0) and denote its optimal solution by x0 and its optimal value by UB(Y0). Set $UB_0=UB(Y^0)$, $y_i^0=g_i(x^0)$ (i = 1, 2, ..., p), and $LB_0=H_0(x^0,y^0)$.

If UB0 − LB0 ≤ ε, then the algorithm terminates, and (x0, y0) and x0 are global optimal solutions of Q(Y0) and the (SAR), respectively. Otherwise, let Ω = ∅, k = 1, and go to the following reducing step.

(2) Reducing step. Set $\underline{LB}=LB_{k-1}$. For the considered sub-rectangle Yk−1, use the outcome space range reduction method proposed in Section 2 to condense Yk−1; still denote the remaining rectangle by Yk−1, and let LBk = LBk−1.

(3) Dividing step. According to the proposed branching operation, divide Yk−1 into two new sub-rectangles Yk,1 and Yk,2. Let the new set of partitioned sub-rectangles be $\overline{Y}^k=\{Y^{k,1},Y^{k,2}\}$ and set $\Omega=\Omega\cup\{Y^{k-1}\}$.

Solve the LRP(Yk,t) to obtain UB(Yk,t) and xk,t for each $Y^{k,t}\in\overline{Y}^k$, and set $y_i^{k,t}=g_i(x^{k,t})$, i = 1, 2, ..., p. If $UB(Y^{k,t})<LB_k$, let $\overline{Y}^k:=\overline{Y}^k\setminus\{Y^{k,t}\}$ and $\Omega=\Omega\cup\{Y^{k,t}\}$; otherwise, update the lower bound $LB_k=\max\{LB_k,H_0(x^{k,t},y^{k,t})\}$, if possible.

(4) Renewing the upper bound step. Let the remaining subdivided set be $\Upsilon_{k-1}:=\left(\Upsilon_{k-1}\setminus\{Y^{k-1}\}\right)\cup\overline{Y}^k$, and update the upper bound $UB_k=\max_{Y\in\Upsilon_{k-1}}UB(Y)$.

(5) Convergence checking step. Denote $\Upsilon_k=\left\{Y\mid Y\in\left(\Upsilon_{k-1}\cup\{Y^{k,1},Y^{k,2}\}\right),\ Y\notin\Omega\right\}$ and $UB_k=\max\{UB(Y)\mid Y\in\Upsilon_k\}$. If UBk − LBk ≤ ε, then the algorithm terminates, and (xk, yk) and xk are global optimal solutions of Q(Y0) and the (SAR), respectively. Otherwise, let k = k + 1, select the sub-rectangle in Υk−1 with the largest upper bound as the next Yk−1, and go back to the reducing step.
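To make the steps concrete, the following self-contained sketch (ours, not the authors' C++ implementation) runs the whole loop (relaxation, reduction, bisection, bounding) on Example 4.4 below. Because the feasible set there is the segment 5x1 − 3x2 = 3, 1.5 ≤ x1 ≤ 3, every affine function is affine in t = x1 and each LRP is solved by endpoint evaluation rather than a simplex code; this shortcut, and all function names, are assumptions for illustration.

```python
import heapq

T1, T2 = 1.5, 3.0                      # feasible range of t = x1

def x2(t):                             # x2 recovered from 5*x1 - 3*x2 = 3
    return (5.0 * t - 3.0) / 3.0

def f(i, t):                           # numerators f_i
    return 37*t + 73*x2(t) + 13 if i == 0 else 63*t - 18*x2(t) + 39

def g(i, t):                           # denominators g_i (both > 0, so T = p = 2)
    return 13*t + 13*x2(t) + 13 if i == 0 else 13*t + 26*x2(t) + 13

def phi(t):                            # original objective
    return sum(f(i, t) / g(i, t) for i in range(2))

def H0U(t, L, U):
    # LRP objective: positive numerator coefficients divided by L_i,
    # the single negative one (c_22 = -18) by U_2.
    return (37*t + 73*x2(t) + 13)/L[0] + (63*t + 39)/L[1] - 18*x2(t)/U[1]

def solve_lrp(L, U):
    # Constraints g_i(t) <= U_i with g_i increasing in t cap t from above;
    # an affine objective over an interval is maximized at an endpoint.
    hi = min(T2, 3*U[0]/104, (3*U[1] + 39)/169)
    if hi < T1:
        return None                    # node infeasible
    t = max((T1, hi), key=lambda s: H0U(s, L, U))
    return H0U(t, L, U), t

l0 = [min(f(i, T1), f(i, T2)) for i in range(2)]   # numerator bounds
u0 = [max(f(i, T1), f(i, T2)) for i in range(2)]

def reduce_box(L, U, LB):              # range reduction of Theorem 2.3
    UBbar = sum(u0[i] / L[i] for i in range(2))
    if UBbar < LB:
        return None                    # case (i): prune the whole box
    L, U = list(L), list(U)
    for s in range(2):
        denom = LB - UBbar + u0[s] / L[s]
        if denom > 1e-12:              # beta_s well defined
            beta = u0[s] / denom
            if beta < L[s]:
                return None
            U[s] = min(U[s], beta)     # delete (beta_s, U_s]
    return L, U

L0 = [g(0, T1), g(1, T1)]              # initial outcome rectangle Y^0
U0 = [g(0, T2), g(1, T2)]
ub0, t0 = solve_lrp(L0, U0)
best_lb, eps = phi(t0), 1e-2
heap = [(-ub0, L0, U0)]                # best-first on upper bounds
for _ in range(2000):                  # safety cap on node expansions
    if not heap:
        break
    nub, L, U = heapq.heappop(heap)
    if -nub - best_lb <= eps:          # global gap closed
        break
    red = reduce_box(L, U, best_lb)
    if red is None:
        continue
    L, U = red
    q = max(range(2), key=lambda i: U[i] - L[i])   # bisect longest edge
    mid = 0.5 * (L[q] + U[q])
    for Lc, Uc in ((L[:], U[:q] + [mid] + U[q+1:]),
                   (L[:q] + [mid] + L[q+1:], U[:])):
        res = solve_lrp(Lc, Uc)
        if res is None:
            continue
        cub, ct = res
        best_lb = max(best_lb, phi(ct))            # renew lower bound
        if cub > best_lb + 1e-12:                  # keep promising nodes
            heapq.heappush(heap, (-cub, Lc, Uc))
```

On this instance best_lb reaches the optimal value 5.0 (attained at x1 = 3, x2 = 4), and the upper bounds of the surviving nodes shrink toward it as boxes are reduced and bisected.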

## 3.3 Global convergence of the proposed algorithm

The global convergence of the above algorithm is described in the following.

The proposed outcome space branch-and-bound algorithm using the new range reduction method either terminates finitely with a global optimal solution of the (SAR), or produces an infinite solution sequence {xk} whose limit point x* is a global optimal solution of the (SAR).

Proof. Suppose first that the algorithm terminates finitely at iteration k, k ≥ 0. Then, when the algorithm terminates, by solving the LRP(Yk) we obtain feasible solutions xk and (xk, yk) of the (SAR) and the (Q), respectively, where

$$y_i^k=g_i(x^k),\quad i=1,2,\dots,p.$$

By the convergence checking step, the computational methods for the lower and upper bounds, and Theorems 2.1 and 2.2, we have

$$H_0(x^k,y^k)\ge UB_k-\varepsilon,\qquad UB_k\ge v,\qquad v\ge H_0(x^k,y^k),\qquad \varphi(x^k)=H_0(x^k,y^k),$$
where v denotes the global optimal value of the problem Q(Y0).

Combining the above inequalities and the equality, it follows that

$$v-\varepsilon\le\varphi(x^k)\le v.$$

Therefore, if the algorithm terminates finitely at iteration k, then xk is a global optimal solution of the (SAR).

If the proposed algorithm produces an infinite solution sequence {xk} by solving the linear relaxation programs LRP(Yk), then, letting

$$y_i^k=g_i(x^k),\quad i=1,2,\dots,p,$$

we obtain a solution sequence {(xk, yk)} for Q(Yk). By the continuity of gi(x), the fact that $g_i(x^k)=y_i^k\in\left[L_i^k,U_i^k\right]$ (i = 1, ..., p), and the exhaustiveness of the branching operation, we have, for any i ∈ {1, 2, ..., p},

$$g_i(x^*)=\lim_{k\to\infty}g_i(x^k)=\lim_{k\to\infty}y_i^k\in\bigcap_k\left[L_i^k,U_i^k\right]=\{y_i^*\}.$$

Thus, (x*, y*) is a feasible point of Q(Y0). Also, since {UB(Yk)} is a decreasing bounded sequence satisfying UB(Yk) ≥ v, we get

$$H_0(x^*,y^*)\le v\le\lim_{k\to\infty}UB(Y^k)=\lim_{k\to\infty}H_0^U(x^k)=H_0(x^*,y^*).$$

Therefore, from the computational method of the lower bound and the continuity of φ(x), we have

$$\lim_{k\to\infty}LB(Y^k)=v=H_0(x^*,y^*)=\varphi(x^*)=\lim_{k\to\infty}\varphi(x^k)=\lim_{k\to\infty}UB(Y^k).$$

Hence, x* is a global optimal solution of the (SAR). The proof is completed.□

## 3.4 Comparing with the algorithms in [9] and [15]

Based on a linearizing technique, Ji et al. [9] presented a rectangle branch-and-bound algorithm for solving the sum of linear ratios problem under the assumption that all numerators of the ratios are greater than or equal to 0. In the approach of [9], the branching process takes place in Rn, where n is the number of decision variables.

Using the same logic, by utilizing a two-level linear approximation technique, Wang et al. [15] also presented a rectangle branch-and-bound algorithm for solving the sum of linear ratios problem under the assumption that all numerators and denominators of the ratios are positive. In the algorithm of [15], the branching process also takes place in Rn, where n again denotes the number of decision variables.

In this paper, by contrast, based on the new linear relaxation bounding technique and the new outcome space range reduction method, we present an accelerated outcome space branch-and-bound algorithm for globally solving the sum of affine ratios problem, which only requires that the denominators of the ratios are nonzero. The proposed algorithm involves partitioning a p-dimensional outcome space rectangle of the denominators, obtained by computing the minimum and maximum values of the denominator of each ratio over the feasible region, where p is the number of ratios.

Compared with the known algorithms in [9] and [15], the proposed algorithm economizes the required computations by conducting the branch-and-bound search in Rp rather than in Rn or R2p, where p is the number of ratios in the (SAR) and n is the number of decision variables. This matters because, in many practical problems, p is usually less than four or five; that is to say, p is much smaller than n in general. The numerical comparisons of computational performance for these algorithms show that the proposed algorithm has higher computational efficiency than the algorithm of [15].

## 4 Numerical experiments

To test the performance of the proposed new outcome space range reduction method, several test examples were run on an Intel(R) Core(TM) i5-4590S CPU @ 3.0 GHz computer. Although these examples have relatively few variables, they are very challenging. The proposed outcome space branch-and-bound algorithm using the new range reduction method is coded in C++, and each linear relaxation programming problem is solved by the simplex approach. Numerical results are listed in Tables 1-2.

In Table 1, two notations are used for column headers: Iter.: the number of iterations; Time(s): the running time of the algorithm in seconds.

Table 1

Numerical results for Examples 4.1-4.11.

Example 4.1 ([1, 2]).

$$\begin{aligned}\max\ &\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_2+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\\ \text{s.t.}\ &2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+3x_3\le 10,\\ &5x_1+9x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\le 10,\\ &x_1,x_2,x_3\ge 0.\end{aligned}$$

Example 4.2 ([19]).

$$\begin{aligned}\max\ &\frac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\frac{3x_1+5x_2+50}{3x_1+5x_2+3x_3+50}+\frac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50}\\ \text{s.t.}\ &6x_1+3x_2+3x_3\le 10,\quad 10x_1+3x_2+8x_3\le 10,\quad x_1,x_2,x_3\ge 0.\end{aligned}$$

Example 4.3

$$\begin{aligned}\max\ &\frac{2x_1+x_2}{x_1+2x_2}\\ \text{s.t.}\ &2x_1+x_2\le 6,\quad 3x_1+x_2\le 8,\quad -x_1+x_2\ge -1,\quad x_1,x_2\ge 1.\end{aligned}$$

Example 4.4 ([17]).

$$\begin{aligned}\max\ &\frac{37x_1+73x_2+13}{13x_1+13x_2+13}+\frac{63x_1-18x_2+39}{13x_1+26x_2+13}\\ \text{s.t.}\ &5x_1-3x_2=3,\quad 1.5\le x_1\le 3.\end{aligned}$$
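As a quick independent check of Example 4.4 (a brute-force scan, not the paper's algorithm), the equality constraint gives x2 = (5x1 − 3)/3, so the problem reduces to one dimension over x1 ∈ [1.5, 3]:

```python
# Dense grid over the feasible segment; the equality constraint
# 5*x1 - 3*x2 = 3 determines x2 from x1.
best_val, best_x1 = -float("inf"), None
N = 100000
for k in range(N + 1):
    x1 = 1.5 + 1.5 * k / N
    x2 = (5 * x1 - 3) / 3
    v = (37*x1 + 73*x2 + 13) / (13*x1 + 13*x2 + 13) \
        + (63*x1 - 18*x2 + 39) / (13*x1 + 26*x2 + 13)
    if v > best_val:
        best_val, best_x1 = v, x1
```

On this grid the maximum is 5.0, attained at the endpoint x1 = 3 (x2 = 4), where the two ratios evaluate to 416/104 = 4 and 156/156 = 1.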

Example 4.5 ([17]).

$$\begin{aligned}\max\ &\frac{3x_1+x_2-2x_3+0.8}{2x_1-x_2+x_3}+\frac{4x_1-2x_2+x_3}{7x_1+3x_2-x_3}\\ \text{s.t.}\ &x_1+x_2-x_3\le 1,\quad -x_1+x_2-x_3\le -1,\\ &12x_1+5x_2+12x_3\le 34.8,\quad 12x_1+12x_2+7x_3\le 29.1,\\ &-6x_1+x_2+x_3\le -4.1.\end{aligned}$$

Example 4.6 ([16]).

$$\begin{aligned}\min\ &\frac{-3.333x_1-3x_2-1}{1.666x_1+x_2+1}+\frac{-4x_1-3x_2-1}{x_1+x_2+1}\\ \text{s.t.}\ &5x_1+4x_2\le 10,\quad -x_1\le -0.1,\quad -x_2\le -0.1,\\ &-2x_1-x_2\le -2,\quad x_1,x_2\ge 0.\end{aligned}$$

Example 4.7 ([1, 2]).

$$\begin{aligned}\max\ &\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\\ \text{s.t.}\ &2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\\ &5x_1+9x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\le 10,\\ &x_1,x_2,x_3\ge 0.\end{aligned}$$

Example 4.8 ([1, 2]).

$$\begin{aligned}\max\ &\frac{3x_1+4x_2+50}{3x_1+5x_2+4x_3+50}+\frac{3x_1+5x_2+3x_3+50}{-5x_1-5x_2-4x_3-50}+\frac{x_1+2x_2+4x_3+50}{-5x_2-4x_3-50}+\frac{4x_1+3x_2+3x_3+50}{-3x_2-3x_3-50}\\ \text{s.t.}\ &6x_1+3x_2+3x_3\le 10,\quad 10x_1+3x_2+8x_3\le 10,\quad x_1,x_2,x_3\ge 0.\end{aligned}$$

Example 4.9 ([1, 2]).

$$\begin{aligned}\max\ &\frac{37x_1+73x_2+13}{13x_1+13x_2+13}+\frac{63x_1-18x_2+39}{-13x_1-26x_2-13}+\frac{13x_1+13x_2+13}{63x_1-18x_2+39}+\frac{13x_1+26x_2+13}{-37x_1-73x_2-13}\\ \text{s.t.}\ &5x_1-3x_2=3,\quad 1.5\le x_1\le 3.\end{aligned}$$

Example 4.10 ([13]).

$$\begin{aligned}\min\ &\frac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+x_3+50}+\frac{3x_1+4x_2+50}{3x_1+3x_2+50}+\frac{4x_1+2x_2+4x_3+50}{4x_1+x_2+3x_3+50}\\ \text{s.t.}\ &2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\\ &5x_1+9x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\le 10,\\ &x_1,x_2,x_3\ge 0.\end{aligned}$$

Example 4.11 ([1, 2]).

$$\begin{aligned}\max\ &\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\\ \text{s.t.}\ &2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\\ &9x_1+7x_2+3x_3\le 10,\quad x_1,x_2,x_3\ge 0.\end{aligned}$$

Example 4.12 ([15]).

$$\begin{aligned}\max\ &\sum_{j=1}^{p}\frac{\sum_{i=1}^{n}c_{ji}x_i+e_j}{\sum_{i=1}^{n}d_{ji}x_i+e_j}\\ \text{s.t.}\ &\sum_{i=1}^{n}a_{ki}x_i\le b_k,\quad k=1,\dots,m,\\ &x_i\ge 0,\quad i=1,2,\dots,n,\end{aligned}$$

where all elements cji, dji, aki, j = 1, ..., p, k = 1, ..., m, i = 1, ..., n, are randomly generated in the unit interval [0.0, 1.0]; the constant terms in the numerator and denominator of each ratio are the same number ej, randomly selected between 1.0 and 100.0; and all right-hand sides bk, k = 1, 2, ..., m, equal the constant 1.0. In Example 4.12, m denotes the number of constraints, n denotes the dimension of the considered problem, and p represents the number of affine ratios in the objective function.

In Table 2, the following notations are used for column headers: Ave. Iter.: the average number of iterations of the algorithm; Ave. L.: the average maximum number of nodes stored by the algorithm; Ave. Time(s): the average running time of the algorithm in seconds.

Table 2

Numerical results for Example 4.12.

The numerical results in Tables 1-2 show that our algorithm can globally solve the problem (SAR).

## 5 Conclusion

In this paper, a new range reduction method for the outcome space region of the denominators is presented for globally solving the sum of affine ratios problem (SAR). The proposed range reduction method can be used to discard a part of the investigated outcome space region of the denominators in which the global optimal solution of the equivalent problem (Q) does not exist, and it can be seen as an accelerating tool that improves the computational efficiency of the proposed outcome space branch-and-bound algorithm for solving the problem (SAR). Several numerical examples are used to verify the superiority of the proposed outcome space branch-and-bound algorithm using the new range reduction method.

## Competing interests

The authors declare that they have no competing interests.

## Acknowledgement

This paper is supported by the National Natural Science Foundation of China (61373174), the Natural Science Foundation of Henan Province (152300410097, 142300410352), the Higher School Key Scientific Research Projects of Henan Province (14A110024, 16A110014, 17A110021), the Major Scientific Research Projects of Henan Institute of Science and Technology (2015ZD07), the High-level Scientific Research Personnel Project of Henan Institute of Science and Technology (2015037), and the Science and Technology Innovation Project of Henan Institute of Science and Technology.

## References

- [1] Almogy Y., Levin O., Parametric Analysis of a Multistage Stochastic Shipping Problem, Operational Research 69, Tavistock Publications, London, England, 1970
- [2] Gao Y., Jin S., A global optimization algorithm for sum of linear ratios problem, J. Appl. Math., 2013, http://dx.doi.org/10.1155/2013/276245
- [3] Colantoni C.S., Manes R.P., Whinston A., Programming, Profit Rates, and Pricing Decisions, Accounting Review, 1969, 44, 467-481
- [4] Konno H., Inori M., Bond Portfolio Optimization by Bilinear Fractional Programming, J. Oper. Res. Soc. Jpn., 1989, 32, 143-158
- [5] Konno H., Watanabe H., Bond Portfolio Optimization Problems and Their Applications to Index Tracking: A Partial Optimization Approach, J. Oper. Res. Soc. Jpn., 1996, 39, 295-306
- [6] Charnes A., Cooper W.W., Programming with linear fractional functions, Naval Res. Logistics Quarterly, 1962, 9, 181-186
- [7] Konno H., Yajima Y., Matsui T., Parametric simplex algorithms for solving a special class of nonconvex minimization problem, J. Glob. Optim., 1991, 1, 65-81
- [8] Falk J.E., Palocsay S.W., Image space analysis of generalized fractional programs, J. Glob. Optim., 1994, 4, 63-88
- [9] Ji Y., Zhang K.C., Qu S.J., A deterministic global optimization algorithm, Appl. Math. Comput., 2007, 185, 382-387
- [10] Konno H., Fukaishi K., A Branch and Bound Algorithm for Solving Low Rank Linear Multiplicative and Fractional Programming Problems, J. Glob. Optim., 2000, 18, 283-299
- [11] Shen P., Wang C., Global optimization for sum of linear ratios problem with coefficients, Appl. Math. Comput., 2006, 176, 219-229
- [12] Wang C., Shen P., A global optimization algorithm for linear fractional programming, Appl. Math. Comput., 2008, 204, 281-287
- [13] Jiao H., A branch and bound algorithm for globally solving a class of nonconvex programming problem, Nonlinear Anal., 2009, 70, 1113-1123
- [14] Jiao H., Chen Y., A note on a deterministic global optimization algorithm, Appl. Math. Comput., 2008, 202, 67-70
- [15] Wang Y., Shen P., Liang Z., A branch-and-bound algorithm to globally solve the sum of several linear ratios, Appl. Math. Comput., 2005, 168, 89-101
- [16] Shi Y., Global optimization for sum of ratios problems, Master's Thesis, Henan Normal University, Xinxiang, China, 2011
- [17] Phuong N.T.H., Tuy H., A unified monotonic approach to generalized linear fractional programming, J. Glob. Optim., 2003, 26, 229-259
- [18] Jiao H., Wang Z., Chen Y., Global optimization algorithm for sum of generalized polynomial ratios problem, Appl. Math. Modell., 2013, 37, 187-197
- [19] Pei Y., Zhu D., Global optimization method for maximizing the sum of difference of convex functions ratios over nonconvex region, J. Appl. Math. Comput., 2013, 41, 153-169
- [20] Jiao H.-W., Liu S.-Y., A practicable branch and bound algorithm for sum of linear ratios problem, Eur. J. Oper. Res., 2015, 243(3), 723-730
- [21] Jiao H., Liu S., Range division and compression algorithm for quadratically constrained sum of quadratic ratios, Comput. Appl. Math., (in press), DOI:10.1007/s40314-015-0224-5
- [22] Jiao H.-W., Liu S.-Y., Zhao Y.-F., Effective algorithm for solving the generalized linear multiplicative problem with generalized polynomial constraints, Appl. Math. Model., 2015, 39(23-24), 7568-7582
- [23] Horst R., Tuy H., Global Optimization: Deterministic Approaches, 2nd Edition, Springer-Verlag, Berlin, 1993

Accepted: 2016-08-02

Published Online: 2016-10-06

Published in Print: 2016-01-01

Citation Information: Open Mathematics, ISSN (Online) 2391-5455.