Open Mathematics

formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo

Open Access | Online ISSN: 2391-5455
Volume 14, Issue 1 (Jan 2016)

Outcome space range reduction method for global optimization of sum of affine ratios problem

Hongwei Jiao (corresponding author), School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China
Sanyang Liu, School of Mathematics and Statistics, Xidian University, Xi’an 710071, China
Jingben Yin, School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China
Yingfeng Zhao, School of Mathematics and Statistics, Xidian University, Xi’an 710071, China
Published Online: 2016-10-06 | DOI: https://doi.org/10.1515/math-2016-0058

Abstract

Many algorithms for globally solving the sum of affine ratios problem (SAR) are based on an equivalent problem and a branch-and-bound framework, and the exhaustiveness of the branching rule leads to a significant increase in the computational burden of solving the equivalent problem. In this study, a new range reduction method for the outcome space of the denominators is presented for globally solving the sum of affine ratios problem (SAR). The proposed range reduction method makes it possible to delete a large part of the outcome space region of the denominators in which no global optimal solution of the equivalent problem exists, and it can be seen as an accelerating device for the global optimization of the (SAR). Several numerical examples are presented to demonstrate the advantages of the proposed algorithm using the new range reduction method in terms of both computational efficiency and solution quality.

Keywords: Range reduction method; Global optimization; Sum of affine ratios; Linear relaxation program; Branch-and-bound

MSC 2010: 90C26; 90C32

1 Introduction

In this article, we shall investigate the following sum of affine ratios problem:

$$(\mathrm{SAR}):\quad \begin{cases} \max\ \varphi(x)=\displaystyle\sum_{i=1}^{p}\frac{f_i(x)}{g_i(x)} \\[1mm] \text{s.t.}\ \ x\in\Lambda=\{x\in\mathbb{R}^n \mid Ax\le b,\ x\ge 0\}, \end{cases}$$

where A is an m × n matrix; b is an m-dimensional vector; f_i(x) and g_i(x) are affine functions; Λ is a nonempty compact set; and each denominator satisfies g_i(x) ≠ 0 on Λ.

The sum of affine ratios problem (SAR) has attracted the interest of researchers and practitioners for many years. From a practical point of view, the problem (SAR) has a broad range of applications, such as transportation design [1, 2], production planning [3], finance investment [1, 2], etc.; in these applications, p is usually less than four or five. From a research point of view, the problem (SAR) poses significant theoretical and computational challenges, mainly because it is a global optimization problem that is well known to generally possess multiple local optimal solutions that are not globally optimal.

During the past several decades, under the assumption that f_i(x) ≥ 0 and g_i(x) > 0 for any x ∈ Λ, several algorithms have been proposed for solving the sum of affine ratios problem (SAR), for example, simplex and parametric simplex methods [1, 2], an image space approach [8], branch-and-bound methods [9-15], a trapezoidal branch-and-bound algorithm [16], a monotonic optimization method [17], and so on. Recently, by utilizing a three-level linear relaxation method, Jiao et al. [18] presented a global optimization algorithm for the sum of generalized polynomial ratios problem; based on a simplicial branch-and-bound framework, Pei et al. [19] proposed a global optimization algorithm for solving the sum of D.C. ratios problem; using new accelerating techniques, Jiao and Liu [1, 2] proposed two branch-and-bound algorithms for the sum of affine ratios and the sum of quadratic ratios problems, respectively; and Jiao et al. [22] constructed a new linearizing technique for globally solving the generalized linear multiplicative problem. Although these methods can all be used to solve special cases of the sum of affine ratios problem (SAR), little work has been done on globally solving the general sum of affine ratios problem investigated in this article.

The main purpose of this paper is to present a new range reduction method for an outcome space branch-and-bound algorithm for the sum of affine ratios problem (SAR). By making full use of the objective function of the equivalent problem and the currently known lower bound, a new outcome space range reduction method is constructed. It provides a theoretical possibility to delete a large part of the investigated outcome space region of the denominators in which no global optimal solution of the equivalent problem exists, and it can be used as an accelerating technique to enhance the computational speed of the proposed outcome space algorithm for the sum of affine ratios problem (SAR). Numerical results are given to demonstrate that the computational efficiency of the proposed outcome space algorithm can be obviously enhanced by using this new range reduction method.

This paper is organized as follows. In Section 2, a new range reduction method based on the outcome space region of the denominators is introduced. In Section 3, by combining the outcome space range reduction method with a branch-and-bound scheme, a new global optimization algorithm is described and its convergence is established. Section 4 presents some common test examples and the numerical results obtained. Finally, concluding remarks are given.

2 Outcome space range reduction method

In the following, we focus on generating a range reduction method for reducing the part of the investigated outcome space region of the denominators that does not contain a global optimal solution of the problem (SAR), and on using this method as a tool for accelerating the computational speed of the proposed outcome space algorithm for the problem (SAR).

By the assumption that the denominator g_i(x) ≠ 0 for all x ∈ Λ and the continuity of the fractional function f_i(x)/g_i(x), we have that either g_i(x) > 0 or g_i(x) < 0 on Λ. Therefore, without loss of generality, we can always assume that g_i(x) > 0 for i = 1, 2, ..., T and g_i(x) < 0 for i = T + 1, T + 2, ..., p. Besides, if f_i(x) < 0 for some i ∈ {1, 2, ..., p}, then by using the technique proposed in [20], f_i(x) ≥ 0 can always be ensured. Thus, without loss of generality, we can always suppose that f_i(x) ≥ 0, and set

$$L_i^0=\min_{x\in\Lambda} g_i(x),\qquad U_i^0=\max_{x\in\Lambda} g_i(x),\qquad i=1,2,\ldots,p,$$

and denote the initial rectangle

$$Y^0=\{y\in\mathbb{R}^p \mid L_i^0\le y_i\le U_i^0,\ i=1,2,\ldots,p\},$$

then the problem (SAR) can be transformed into the equivalent problem (Q) as follows.

$$Q(Y^0):\quad \begin{cases} \max\ H_0(x,y)=\displaystyle\sum_{i=1}^{p}\frac{f_i(x)}{y_i} \\[1mm] \text{s.t.}\ \ H_i(x,y)=y_i-g_i(x)\ge 0,\ i=1,2,\ldots,T, \\ \qquad H_i(x,y)=y_i-g_i(x)\le 0,\ i=T+1,T+2,\ldots,p, \\ \qquad x\in\Lambda,\ y\in Y^0. \end{cases}$$

The key equivalence results for the problem (SAR) and the Q(Y0) are discussed in the following theorem.

Theorem 2.1: If (x*, y*) is a global optimal solution of the problem Q(Y^0), then x* is also a global optimal solution of the problem (SAR). Conversely, if x* is a global optimal solution of the problem (SAR), then (x*, y*) is a global optimal solution of the problem Q(Y^0), where y_i* = g_i(x*), i = 1, 2, ..., p.

Proof: The conclusion is obvious; the proof is therefore omitted.□

By Theorem 2.1, in order to globally solve the problem (SAR), we may globally solve the problem Q(Y0) instead.

For each rectangle Y^k = {y ∈ ℝ^p | L_i^k ≤ y_i ≤ U_i^k, i = 1, 2, ..., p} ⊆ Y^0, the following notation and functions are introduced:

$$\begin{aligned} f_i(x)&=\sum_{j=1}^{n}c_{ij}x_j+e_i;\\ H_0^U(x)&=\sum_{i=1}^{p}\Big(\sum_{j=1,\,c_{ij}>0}^{n}\frac{c_{ij}}{L_i^k}x_j+\sum_{j=1,\,c_{ij}<0}^{n}\frac{c_{ij}}{U_i^k}x_j\Big)+\sum_{i=1,\,e_i>0}^{p}\frac{e_i}{L_i^k}+\sum_{i=1,\,e_i<0}^{p}\frac{e_i}{U_i^k};\\ H_0^L(x)&=\sum_{i=1}^{p}\Big(\sum_{j=1,\,c_{ij}>0}^{n}\frac{c_{ij}}{U_i^k}x_j+\sum_{j=1,\,c_{ij}<0}^{n}\frac{c_{ij}}{L_i^k}x_j\Big)+\sum_{i=1,\,e_i>0}^{p}\frac{e_i}{U_i^k}+\sum_{i=1,\,e_i<0}^{p}\frac{e_i}{L_i^k};\\ H_i^U(x)&=U_i^k-g_i(x),\quad H_i^L(x)=L_i^k-g_i(x),\quad i=1,2,\ldots,T;\\ H_i^L(x)&=L_i^k-g_i(x),\quad H_i^U(x)=U_i^k-g_i(x),\quad i=T+1,T+2,\ldots,p. \end{aligned}$$

Theorem 2.2: Consider the functions H_i^U(x), H_i^L(x) and H_i(x, y) for any x ∈ Λ and y ∈ Y^k ⊆ Y^0, where i = 0, 1, 2, ..., p. Then the following two statements are valid.
(i) $H_i^U(x)\ge H_i(x,y)\ge H_i^L(x),\ i=0,1,2,\ldots,p$.
(ii) $\lim_{\|U^k-L^k\|\to 0}\big[H_i^U(x)-H_i(x,y)\big]=\lim_{\|U^k-L^k\|\to 0}\big[H_i(x,y)-H_i^L(x)\big]=0$.

Proof: The proof is straightforward and is omitted.□

Based on Theorem 2.2, for any Y^k ⊆ Y^0, we can construct the corresponding linear relaxation programming problem LRP(Y^k) of Q(Y^k) as follows, which provides a reliable upper bound for the global optimal value of the problem Q(Y^k).

$$\mathrm{LRP}(Y^k):\quad \begin{cases} \max\ H_0^U(x)=\displaystyle\sum_{i=1}^{p}\Big(\sum_{j=1,\,c_{ij}>0}^{n}\frac{c_{ij}}{L_i^k}x_j+\sum_{j=1,\,c_{ij}<0}^{n}\frac{c_{ij}}{U_i^k}x_j\Big)+\sum_{i=1,\,e_i>0}^{p}\frac{e_i}{L_i^k}+\sum_{i=1,\,e_i<0}^{p}\frac{e_i}{U_i^k} \\[1mm] \text{s.t.}\ \ H_i^U(x)=U_i^k-g_i(x)\ge 0,\ i=1,2,\ldots,T, \\ \qquad H_i^L(x)=L_i^k-g_i(x)\le 0,\ i=T+1,T+2,\ldots,p, \\ \qquad x\in\Lambda. \end{cases}$$
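To make the construction concrete, the following is a minimal sketch, written in Python rather than the authors' C++ implementation, of how LRP(Y^k) could be assembled and solved with an off-the-shelf LP solver. The data layout (num_coef, num_const, den_coef, den_const for the coefficients and constants of f_i and g_i, and A, b for Λ) is an assumption introduced here only for illustration.

```python
# A minimal sketch (not the authors' code) of building and solving LRP(Y^k)
# with scipy.optimize.linprog.  Assumed data layout:
#   f_i(x) = num_coef[i] @ x + num_const[i],  g_i(x) = den_coef[i] @ x + den_const[i],
#   Lambda = {x >= 0 : A x <= b},  [L, U] is the current box Y^k, and the first T
#   denominators are the positive ones.
import numpy as np
from scipy.optimize import linprog

def solve_lrp(num_coef, num_const, den_coef, den_const, A, b, L, U, T):
    p, n = num_coef.shape
    # Objective H_0^U(x): a coefficient c_ij is divided by L_i^k when c_ij > 0
    # and by U_i^k otherwise; the constants e_i are treated in the same way.
    obj = np.zeros(n)
    const = 0.0
    for i in range(p):
        obj += np.where(num_coef[i] > 0, num_coef[i] / L[i], num_coef[i] / U[i])
        const += num_const[i] / L[i] if num_const[i] > 0 else num_const[i] / U[i]
    # Relaxed constraints: g_i(x) <= U_i^k for i = 1,...,T and g_i(x) >= L_i^k
    # for i = T+1,...,p, stacked with the original constraints A x <= b
    # (x >= 0 is the default variable bound in linprog).
    A_ub = np.vstack([A, den_coef[:T], -den_coef[T:]])
    b_ub = np.concatenate([b, U[:T] - den_const[:T], den_const[T:] - L[T:]])
    res = linprog(-obj, A_ub=A_ub, b_ub=b_ub, method="highs")   # maximize H_0^U
    if not res.success:
        return None, -np.inf
    return res.x, -res.fun + const   # LP solution and the upper bound UB(Y^k)
```

The returned value of H_0^U at the LP solution plays the role of the upper bound UB(Y^k) used by the algorithm in Section 3.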

For any rectangle Y^k ⊆ Y^0 that is still of interest during the iterations of the algorithm, we want to recognize whether or not Y^k contains a global optimal solution of the Q(Y^0). The proposed new outcome space range reduction method aims at replacing the rectangle Y^k = [L^k, U^k] with a smaller rectangle $\ddot{Y}=[\ddot{L},\ddot{U}]$ without deleting any global optimal solution of the Q(Y^0).

In the following, we suppose that $\underline{LB}$ is the currently known lower bound of the global optimal value of the Q(Y^0) at iteration k, and that v(Y^k) is the maximum value of H_0(x, y) over y ∈ Y^k and x ∈ Λ, and set

$$l_i^0=\min_{x\in\Lambda}f_i(x),\qquad u_i^0=\max_{x\in\Lambda}f_i(x),\qquad i=1,2,\ldots,p,$$
$$\overline{UB}^k=\sum_{i=1}^{T}\frac{u_i^0}{L_i^k}+\sum_{i=T+1}^{p}\frac{l_i^0}{L_i^k},\qquad \beta_i^k=\begin{cases}\dfrac{u_i^0}{\underline{LB}-\overline{UB}^k+u_i^0/L_i^k}, & i=1,2,\ldots,T,\\[2mm] \dfrac{l_i^0}{\underline{LB}-\overline{UB}^k+l_i^0/L_i^k}, & i=T+1,T+2,\ldots,p.\end{cases}$$

Theorem 2.3: For any sub-rectangle $Y^k=(Y_i^k)_{p\times 1}=([L_i^k,U_i^k])_{p\times 1}\subseteq Y^0$, we have the following conclusions:

(i) If $\overline{UB}^k<\underline{LB}$, then there exists no global optimal solution of the Q(Y^0) in Y^k.

(ii) If $\overline{UB}^k\ge\underline{LB}$, then, for each s ∈ {1, 2, ..., p}, there exists no global optimal solution of the Q(Y^0) in $\underline{Y}^k$, where

$$\underline{Y}^k=(\underline{Y}_i^k)_{p\times 1}\subseteq Y^0\quad\text{with}\quad \underline{Y}_i^k=\begin{cases}Y_i^k, & i\ne s,\ i=1,2,\ldots,p,\\ (\beta_i^k,U_i^k]\cap Y_i^k, & i=s\in\{1,2,\ldots,p\}.\end{cases}$$

Proof: (i) If $\overline{UB}^k<\underline{LB}$, then we have

$$v(Y^k)=\max_{y\in Y^k,\,x\in\Lambda}\sum_{i=1}^{p}\frac{f_i(x)}{y_i}\le\sum_{i=1}^{T}\frac{u_i^0}{L_i^k}+\sum_{i=T+1}^{p}\frac{l_i^0}{L_i^k}=\overline{UB}^k<\underline{LB},$$

therefore, there exists no global optimal solution of the problem Q(Y0) in Yk.

(ii) If $\overline{UB}^k\ge\underline{LB}$, then we can get the following results.

For any s ∈ {1, 2, ..., T}, for all x ∈ Λ and y ∈ $\underline{Y}^k$, since $0\le l_i^0\le f_i(x)\le u_i^0$, i = 1, 2, ..., p; $L_i^k\le y_i\le U_i^k$, i = 1, 2, ..., p, i ≠ s; and $\beta_s^k<y_s\le U_s^k$; it follows that

$$\begin{aligned}\max_{y\in\underline{Y}^k,\,x\in\Lambda}\sum_{i=1}^{p}\frac{f_i(x)}{y_i}&\le\max_{y\in\underline{Y}^k,\,x\in\Lambda}\sum_{i=1,\,i\ne s}^{T}\frac{f_i(x)}{y_i}+\max_{y\in\underline{Y}^k,\,x\in\Lambda}\frac{f_s(x)}{y_s}+\max_{y\in\underline{Y}^k,\,x\in\Lambda}\sum_{i=T+1}^{p}\frac{f_i(x)}{y_i}\\&<\sum_{i=1,\,i\ne s}^{T}\frac{u_i^0}{L_i^k}+\frac{u_s^0}{\beta_s^k}+\sum_{i=T+1}^{p}\frac{l_i^0}{L_i^k}=\overline{UB}^k-\frac{u_s^0}{L_s^k}+\frac{u_s^0}{\beta_s^k}=\underline{LB}.\end{aligned}$$

Therefore, there exists no global optimal solution of the Q(Y^0) in $\underline{Y}^k$.

Using the same argument, for any s ∈ {T + 1, T + 2, ..., p}, for all x ∈ Λ and y ∈ $\underline{Y}^k$, since $0\le l_i^0\le f_i(x)\le u_i^0$, i = 1, 2, ..., p; $L_i^k\le y_i\le U_i^k$, i = 1, 2, ..., p, i ≠ s; and $\beta_s^k<y_s\le U_s^k$; it follows that

$$\begin{aligned}\max_{y\in\underline{Y}^k,\,x\in\Lambda}\sum_{i=1}^{p}\frac{f_i(x)}{y_i}&\le\max_{y\in\underline{Y}^k,\,x\in\Lambda}\sum_{i=1}^{T}\frac{f_i(x)}{y_i}+\max_{y\in\underline{Y}^k,\,x\in\Lambda}\sum_{i=T+1,\,i\ne s}^{p}\frac{f_i(x)}{y_i}+\max_{y\in\underline{Y}^k,\,x\in\Lambda}\frac{f_s(x)}{y_s}\\&<\sum_{i=1,\,i\ne s}^{T}\frac{u_i^0}{L_i^k}+\sum_{i=T+1,\,i\ne s}^{p}\frac{l_i^0}{L_i^k}+\frac{l_s^0}{\beta_s^k}=\overline{UB}^k-\frac{l_s^0}{L_s^k}+\frac{l_s^0}{\beta_s^k}=\underline{LB}.\end{aligned}$$

Therefore, there exists no global optimal solution of the Q(Y^0) over $\underline{Y}^k$.□

By Theorem 2.3, we can construct the new range reduction method to cut away a part of the outcome space region of the denominators in which no global optimal solution of the Q(Y^0) exists. Assume that a sub-rectangle

$$Y^k=(Y_i^k)_{p\times 1}\subseteq Y^0\quad\text{with}\quad Y_i^k=[L_i^k,U_i^k]$$

is to be reduced or deleted; then, according to Theorem 2.3, the checked rectangle Y^k should be replaced by the new sub-rectangle

$$\ddot{Y}=(\ddot{Y}_i)_{p\times 1}\quad\text{with}\quad \ddot{Y}_i=[L_i^k,\beta_i^k]\cap Y_i^k,\ i\in\{1,2,\ldots,p\}.$$
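As a concrete illustration, the reduction of Theorem 2.3 can be implemented in a few lines. The sketch below uses the same hypothetical data layout as the LRP fragment above (l0, u0 are the bounds of the numerators over Λ, L, U the current box of the denominators, T the number of positive denominators, and LB the currently known lower bound). Clipping β_i^k back into [L_i^k, U_i^k] before shrinking is an implementation safeguard added here, not part of the theorem's statement.

```python
import numpy as np

def reduce_box(l0, u0, L, U, T, LB):
    """Range reduction of Theorem 2.3; returns None if the whole box is deleted."""
    coef = np.concatenate([u0[:T], l0[T:]])     # u_i^0 for i <= T, l_i^0 for i > T
    UB_bar = np.sum(coef / L)                   # the bound \overline{UB}^k
    if UB_bar < LB:                             # case (i): no optimum in Y^k
        return None
    beta = coef / (LB - UB_bar + coef / L)      # beta_i^k of case (ii)
    U_new = U.copy()
    shrink = (beta >= L) & (beta < U)           # shrink only where beta is usable
    U_new[shrink] = beta[shrink]                # remaining box is [L_i, beta_i]
    return L, U_new
```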

3 Algorithm and its global convergence

In the following, we first describe the branching operation. Then, a new outcome space branch-and-bound algorithm using the new range reduction method is proposed for solving the problem (SAR).

3.1 Branching operation

In the following algorithm, the branching process takes place in the outcome space of the denominators. Suppose that Y = {y ∈ ℝ^p | L ≤ y ≤ U} is a sub-rectangle of Y^0 to be partitioned. The proposed branching operation is described as follows. Denote q = arg max{U_i − L_i | i = 1, 2, ..., p}; using the rectangle bisection method, we divide Y into the two sub-rectangles
$$\hat{Y}^1=\Big\{y\in\mathbb{R}^p \;\Big|\; L_q\le y_q\le \tfrac{L_q+U_q}{2};\ L_i\le y_i\le U_i,\ i=1,2,\ldots,p,\ i\ne q\Big\}$$
and
$$\hat{Y}^2=\Big\{y\in\mathbb{R}^p \;\Big|\; \tfrac{L_q+U_q}{2}\le y_q\le U_q;\ L_i\le y_i\le U_i,\ i=1,2,\ldots,p,\ i\ne q\Big\}.$$
Obviously, by [23], the proposed branching technique is exhaustive; that is, if {Y^k} is a nested sequence of rectangles generated by the proposed branching operation, then as k → ∞ there exists a limit point y* ∈ ℝ^p satisfying $\bigcap_{k}Y^k=\{y^*\}$.
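A sketch of this bisection rule, under the same conventions as the previous code fragments (L and U are arrays describing the rectangle Y), might read:

```python
import numpy as np

def bisect_box(L, U):
    """Split the rectangle [L, U] at the midpoint of its longest edge."""
    q = int(np.argmax(U - L))                   # q = arg max {U_i - L_i}
    mid = (L[q] + U[q]) / 2.0
    L1, U1 = L.copy(), U.copy(); U1[q] = mid    # child rectangle \hat{Y}^1
    L2, U2 = L.copy(), U.copy(); L2[q] = mid    # child rectangle \hat{Y}^2
    return (L1, U1), (L2, U2)
```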

3.2 Outcome space branch-and-bound algorithm

Based on the above linear relaxation programming problem, the new outcome space range reduction operation and the branching operation, an outcome space branch-and-bound algorithm for solving the (SAR) is described as follows.

Algorithm Steps

(1) Initializing step. Let k = 0, let the initial set of active nodes be ϒ_0 = {Y^0}, and let the convergence tolerance be ε ≥ 0. For each i = 1, 2, ..., p, calculate
$$l_i^0=\min_{x\in\Lambda}f_i(x),\quad u_i^0=\max_{x\in\Lambda}f_i(x),\quad L_i^0=\min_{x\in\Lambda}g_i(x)\quad\text{and}\quad U_i^0=\max_{x\in\Lambda}g_i(x).$$

Solve LRP(Y^0) and denote its optimal solution by x^0 and its optimal value by UB(Y^0). Set UB_0 = UB(Y^0), y_i^0 = g_i(x^0) (i = 1, 2, ..., p), and LB_0 = H_0(x^0, y^0).

If UB_0 − LB_0 ≤ ε, then the algorithm terminates, and (x^0, y^0) and x^0 are global optimal solutions of the Q(Y^0) and the (SAR), respectively. Otherwise, let Ω = ∅, k = 1, and go to the reducing step.

(2) Reducing step. Set $\underline{LB}=LB_{k-1}$. For the considered sub-rectangle Y^{k−1}, use the new outcome space range reduction method proposed in Section 2 to contract Y^{k−1}; still denote the remaining rectangle by Y^{k−1}, and let LB_k = LB_{k−1}.

(3) Dividing step. Using the proposed branching operation, divide Y^{k−1} into two new sub-rectangles Y^{k,1} and Y^{k,2}. Denote the set of new sub-rectangles by $\bar{Y}^k=\{Y^{k,1},Y^{k,2}\}$, and let Ω = Ω ∪ {Y^{k−1}}.

Solve LRP(Y^{k,t}) to obtain UB(Y^{k,t}) and x^{k,t} for each Y^{k,t} ∈ $\bar{Y}^k$, and set y_i^{k,t} = g_i(x^{k,t}), i = 1, 2, ..., p. If UB(Y^{k,t}) < LB_k, let $\bar{Y}^k:=\bar{Y}^k\setminus\{Y^{k,t}\}$ and Ω = Ω ∪ {Y^{k,t}}; otherwise, update the lower bound LB_k = max{LB_k, H_0(x^{k,t}, y^{k,t})}.

(4) Renewing the upper bound step. Let the remaining set of subdivided rectangles be ϒ_{k−1} := (ϒ_{k−1} \ {Y^{k−1}}) ∪ $\bar{Y}^k$, and update the upper bound UB_k = max{UB(Y) | Y ∈ ϒ_{k−1}}.

(5) Convergence checking step. Denote ϒ_k = {Y | Y ∈ (ϒ_{k−1} ∪ {Y^{k,1}, Y^{k,2}}), Y ∉ Ω} and UB_k = max{UB(Y) | Y ∈ ϒ_k}. If UB_k − LB_k ≤ ε, then the algorithm terminates, and (x^k, y^k) and x^k are global optimal solutions of the Q(Y^0) and the (SAR), respectively. Otherwise, select a rectangle Y^k ∈ ϒ_k with UB(Y^k) = UB_k, let k = k + 1, and return to the reducing step.
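Putting the pieces together, the following compact sketch shows one possible reading of steps (1)-(5). It is an illustrative Python reimplementation under the assumptions of the earlier fragments, reusing the hypothetical helpers solve_lrp, reduce_box and bisect_box; it selects the node with the largest upper bound at each iteration, a standard best-bound rule, and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def affine_bounds(coef, const, A, b):
    """Minimum and maximum of the affine function coef @ x + const over Lambda."""
    lo = linprog(coef, A_ub=A, b_ub=b, method="highs").fun + const
    hi = -linprog(-coef, A_ub=A, b_ub=b, method="highs").fun + const
    return lo, hi

def phi(x, num_coef, num_const, den_coef, den_const):
    """Objective of (SAR) evaluated at a feasible point x."""
    return float(np.sum((num_coef @ x + num_const) / (den_coef @ x + den_const)))

def branch_and_bound(num_coef, num_const, den_coef, den_const, A, b, T,
                     eps=1e-6, max_iter=1000):
    p = num_coef.shape[0]
    l0, u0, L0, U0 = (np.zeros(p) for _ in range(4))
    for i in range(p):                               # initializing step
        l0[i], u0[i] = affine_bounds(num_coef[i], num_const[i], A, b)
        L0[i], U0[i] = affine_bounds(den_coef[i], den_const[i], A, b)
    x0, ub0 = solve_lrp(num_coef, num_const, den_coef, den_const, A, b, L0, U0, T)
    LB, best_x = phi(x0, num_coef, num_const, den_coef, den_const), x0
    nodes = [(ub0, L0, U0)]                          # active nodes and their bounds
    for _ in range(max_iter):
        if not nodes or max(n[0] for n in nodes) - LB <= eps:
            break                                    # convergence checking step
        idx = max(range(len(nodes)), key=lambda i: nodes[i][0])
        _, L, U = nodes.pop(idx)                     # best-bound node selection
        reduced = reduce_box(l0, u0, L, U, T, LB)    # reducing step (Theorem 2.3)
        if reduced is None:
            continue
        for Lc, Uc in bisect_box(*reduced):          # dividing step
            xc, ubc = solve_lrp(num_coef, num_const, den_coef, den_const,
                                A, b, Lc, Uc, T)
            if xc is None or ubc < LB:               # fathom the child node
                continue
            val = phi(xc, num_coef, num_const, den_coef, den_const)
            if val > LB:                             # renew the lower bound
                LB, best_x = val, xc
            nodes.append((ubc, Lc, Uc))
    return best_x, LB
```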

3.3 Global convergence of the proposed algorithm

The global convergence of the above algorithm is described in the following.

Theorem 3.1: The proposed outcome space branch-and-bound algorithm using the new range reduction method either terminates finitely with a global optimal solution of the (SAR), or produces an infinite solution sequence {x^k} any limit point x* of which is a global optimal solution of the (SAR).

Proof: Suppose first that the algorithm terminates finitely at some iteration k, k ≥ 0. Upon termination, by solving LRP(Y^k), we obtain feasible solutions x^k and (x^k, y^k) of the (SAR) and the (Q), respectively, where

$$y_i^k=g_i(x^k),\quad i=1,2,\ldots,p.$$

By the convergence checking step, the computational methods for the lower and upper bounds, and Theorems 2.1 and 2.2, we have

$$H_0(x^k,y^k)\ge UB_k-\varepsilon,\qquad UB_k\ge v,\qquad v\ge H_0(x^k,y^k),\qquad \varphi(x^k)=H_0(x^k,y^k),$$
where v denotes the global optimal value of the problem (Q).

Combining the above inequalities and the equality, it follows that

$$v-\varepsilon\le\varphi(x^k)\le v.$$

Therefore, if the algorithm terminates finitely at iteration k, then x^k is a global optimal solution of the (SAR).

Suppose now that the algorithm produces an infinite solution sequence {x^k} by solving the linear relaxation programs LRP(Y^k). Letting

$$y_i^k=g_i(x^k),\quad i=1,2,\ldots,p,$$

we obtain a solution sequence {(x^k, y^k)} for the Q(Y^k). By the continuity of g_i(x), the relation $g_i(x^k)=y_i^k\in[L_i^k,U_i^k]$ (i = 1, ..., p), and the exhaustiveness of the branching operation, we have, for any i ∈ {1, 2, ..., p},

$$g_i(x^*)=\lim_{k\to\infty}g_i(x^k)\in\lim_{k\to\infty}[L_i^k,U_i^k]=\bigcap_{k}[L_i^k,U_i^k]=\{y_i^*\}.$$

Thus, (x*, y*) is a feasible point of the Q(Y^0). Moreover, since {UB(Y^k)} is a non-increasing sequence satisfying UB(Y^k) ≥ v, we obtain

$$H_0(x^*,y^*)\le v\le\lim_{k\to\infty}UB(Y^k)=\lim_{k\to\infty}H_0^U(x^k)=H_0(x^*,y^*).$$

Therefore, from the computational method of the lower bound and the continuity of φ(x), we have

$$\lim_{k\to\infty}LB_k=v=H_0(x^*,y^*)=\varphi(x^*)=\lim_{k\to\infty}\varphi(x^k)=\lim_{k\to\infty}UB(Y^k).$$

Hence, x* is a global optimal solution of the (SAR). The proof is complete.□

3.4 Comparing with the algorithms in [9] and [15]

Based on a linearizing technique, Ji et al. [9] presented a rectangular branch-and-bound algorithm for solving the sum of linear ratios problem under the assumption that all numerators of the ratios are larger than or equal to 0; in the approach of [9], the branching process takes place in R^n, where n is the number of decision variables.

Following the same logic, by utilizing a two-level linear approximation technique, Wang et al. [15] also presented a rectangular branch-and-bound algorithm for solving the sum of linear ratios problem under the assumption that all numerators and denominators of the ratios are positive; in the algorithm of [15], the branching process also takes place in R^n, where n again denotes the number of decision variables.

In this paper, by contrast, based on the new linear relaxation bounding technique and the new outcome space range reduction method, we present an accelerated outcome space branch-and-bound algorithm for globally solving the sum of affine ratios problem, which only requires that the denominators of the ratios are not equal to 0. The proposed algorithm involves partitioning a p-dimensional outcome space rectangle of the denominators, which is obtained by computing the minimum and maximum values of the denominator of each ratio over the feasible region, where p is the number of ratios.

Compared with the known algorithms in [9] and [15], the proposed algorithm economizes the required computations by conducting the branch-and-bound search in R^p rather than in R^n or R^{2p}, where p is the number of ratios in the (SAR) and n is the number of decision variables in the (SAR). This is because, in many practical problems, p is usually less than four or five; that is, p is generally much smaller than n. The numerical comparison of computational performance for these algorithms shows that the proposed algorithm in this paper is more computationally efficient than the algorithm of [15].

4 Numerical experiments

To test the performance of the proposed new outcome space range reduction method, several test examples are implemented on an Intel(R) Core(TM)2 i5-4590s CPU @ 3.0 GHz microcomputer. Although these examples have relatively few variables, they are very challenging. The proposed outcome space branch-and-bound algorithm using the new range reduction method is coded in C++, and each linear relaxation programming problem is solved by the simplex method. Numerical results are listed in the following Tables 1-2.

In Table 1, two notations are used for the column headers: Iter.: the number of iterations; Time(s): the running time of the algorithm in seconds.

Table 1

Numerical results for Examples 4.1-4.11.

Example 4.1 ([1, 2]).

$$\begin{cases}\max\ \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_2+50}{4x_1+4x_2+5x_3-50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2-4x_3+50}{5x_2+4x_3+50}\\[2mm]\text{s.t.}\ \ 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+3x_3\le 10,\ 5x_1+9x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.\end{cases}$$

Example 4.2 ([19]).

$$\begin{cases}\max\ \dfrac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\dfrac{3x_1+5x_2+50}{3x_1+5x_2+3x_3+50}+\dfrac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50}\\[2mm]\text{s.t.}\ \ 6x_1+3x_2+3x_3\le 10,\ 10x_1+3x_2+8x_3\le 10,\ x_1,x_2,x_3\ge 0.\end{cases}$$
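For instance, the data of Example 4.2 could be passed to the hypothetical branch_and_bound sketch of Section 3 as follows (all three denominators are positive on Λ, so T = p = 3); the reported optimal value is not asserted here.

```python
import numpy as np

num_coef  = np.array([[3., 5., 3.], [3., 5., 0.], [4., 2., 4.]])   # numerators f_i
num_const = np.array([50., 50., 50.])
den_coef  = np.array([[3., 4., 5.], [3., 5., 3.], [5., 4., 3.]])   # denominators g_i
den_const = np.array([50., 50., 50.])
A = np.array([[6., 3., 3.], [10., 3., 8.]])                        # constraints of Lambda
b = np.array([10., 10.])

x_best, lb = branch_and_bound(num_coef, num_const, den_coef, den_const, A, b, T=3)
print(x_best, lb)
```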

Example 4.3

$$\begin{cases}\max\ \dfrac{2x_1+x_2}{x_1+2x_2}\\[2mm]\text{s.t.}\ \ 2x_1+x_2\le 6,\ 3x_1+x_2\le 8,\ x_1+x_2\ge 1,\ x_1,x_2\ge 1.\end{cases}$$

Example 4.4 ([17]).

$$\begin{cases}\max\ \dfrac{37x_1+73x_2+13}{13x_1+13x_2+13}+\dfrac{63x_1-18x_2+39}{13x_1+26x_2+13}\\[2mm]\text{s.t.}\ \ 5x_1-3x_2=3,\ 1.5\le x_1\le 3.\end{cases}$$

Example 4.5 ([17]).

$$\begin{cases}\max\ \dfrac{3x_1+x_2-2x_3+0.8}{2x_1-x_2+x_3}+\dfrac{4x_1-2x_2+x_3}{7x_1+3x_2-x_3}\\[2mm]\text{s.t.}\ \ x_1+x_2-x_3\le 1,\ -x_1+x_2-x_3\le -1,\ 12x_1+5x_2+12x_3\le 34.8,\ 12x_1+12x_2+7x_3\le 29.1,\ -6x_1+x_2+x_3\le -4.1.\end{cases}$$

Example 4.6 ([16]).

$$\begin{cases}\min\ \dfrac{3.333x_1-3.000x_2-1.000}{1.666x_1+x_2+1.000}+\dfrac{4.000x_1-3.000x_2-1.000}{x_1+x_2+1.000}\\[2mm]\text{s.t.}\ \ 5.000x_1+4.000x_2\le 10.000,\ x_1\ge 0.100,\ x_2\ge 0.100,\ 2.000x_1-x_2\le 2.000,\ x_1,x_2\ge 0.000.\end{cases}$$

Example 4.7 ([1, 2]).

$$\begin{cases}\max\ \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\\[2mm]\text{s.t.}\ \ 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+2x_3\le 10,\ 5x_1+9x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.\end{cases}$$

Example 4.8 ([1, 2]).

$$\begin{cases}\max\ \dfrac{3x_1+4x_2+50}{3x_1+5x_2+4x_3+50}+\dfrac{3x_1+5x_2+3x_3+50}{5x_1-5x_2-4x_3-50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2-4x_3-50}+\dfrac{4x_1+3x_2+3x_3+50}{3x_2-3x_3-50}\\[2mm]\text{s.t.}\ \ 6x_1+3x_2+3x_3\le 10,\ 10x_1+3x_2+8x_3\le 10,\ x_1,x_2,x_3\ge 0.\end{cases}$$

Example 4.9 ([1, 2]).

$$\begin{cases}\max\ \dfrac{37x_1+73x_2+13}{13x_1+13x_2+13}+\dfrac{63x_1-18x_2+39}{13x_1-26x_2-13}+\dfrac{13x_1+13x_2+13}{63x_1-18x_2+39}+\dfrac{13x_1+26x_2+13}{37x_1-73x_2-13}\\[2mm]\text{s.t.}\ \ 5x_1-3x_2=3,\ 1.5\le x_1\le 3.\end{cases}$$

Example 4.10 ([13]).

$$\begin{cases}\min\ \dfrac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+x_3+50}+\dfrac{3x_1+4x_2+50}{3x_1+3x_2+50}+\dfrac{4x_1+2x_2+4x_3+50}{4x_1+x_2+3x_3+50}\\[2mm]\text{s.t.}\ \ 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+2x_3\le 10,\ 5x_1+9x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.\end{cases}$$

Example 4.11 ([1, 2]).

$$\begin{cases}\max\ \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\\[2mm]\text{s.t.}\ \ 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.\end{cases}$$

Example 4.12 ([15]).

$$\begin{cases}\max\ \displaystyle\sum_{j=1}^{p}\frac{\sum_{i=1}^{n}c_{ji}x_i+e_j}{\sum_{i=1}^{n}d_{ji}x_i+e_j}\\[2mm]\text{s.t.}\ \ \displaystyle\sum_{i=1}^{n}a_{ki}x_i\le b_k,\ k=1,\ldots,m,\qquad x_i\ge 0.0,\ i=1,2,\ldots,n,\end{cases}$$

where all elements c_{ji}, d_{ji}, a_{ki}, j = 1, ..., p, k = 1, ..., m, i = 1, ..., n, are randomly generated in the unit interval [0.0, 1.0]; the constant terms in the numerator and denominator of each ratio are the same number e_j, randomly selected between 1.0 and 100.0; and all right-hand sides b_k, k = 1, 2, ..., m, are equal to the constant 1.0. In Example 4.12, m denotes the number of constraints, n denotes the dimension of the considered problem, and p represents the number of affine ratios in the objective function.
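A sketch of how such random instances can be generated, following the description above, is given below; the seed argument is an assumption added here only for reproducibility.

```python
import numpy as np

def random_instance(p, n, m, seed=0):
    """Random (SAR) instance of Example 4.12: coefficients in [0, 1], shared
    constant terms in [1, 100], and right-hand sides equal to 1."""
    rng = np.random.default_rng(seed)
    num_coef = rng.uniform(0.0, 1.0, size=(p, n))     # c_{ji}
    den_coef = rng.uniform(0.0, 1.0, size=(p, n))     # d_{ji}
    const    = rng.uniform(1.0, 100.0, size=p)        # e_j, shared by numerator and denominator
    A        = rng.uniform(0.0, 1.0, size=(m, n))     # a_{ki}
    b        = np.ones(m)                             # b_k = 1.0
    return num_coef, const, den_coef, const, A, b
```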

In Table 2, the following notations are used for the column headers: Ave. Iter.: the average number of iterations of the algorithm; Ave. L.: the average maximum number of nodes needed by the algorithm; Ave. Time(s): the average running time of the algorithm in seconds.

Table 2

Numerical results for Example 4.12.

The numerical results in Tables 1-2 show that our algorithm can globally solve the problem (SAR).

5 Conclusion

In this paper, a new range reduction method for the outcome space region of the denominators is presented for globally solving the sum of affine ratios problem (SAR). The proposed range reduction method can be used to discard a part of the investigated outcome space region of the denominators in which no global optimal solution of the equivalent problem (Q) exists, and it can be seen as an accelerating tool to improve the computational efficiency of the proposed outcome space branch-and-bound algorithm for solving the problem (SAR). Several numerical examples are used to verify the superiority of the proposed outcome space branch-and-bound algorithm using the new range reduction method.

Competing interests

The authors declare that they have no competing interests.

Acknowledgement

This work was supported by the National Natural Science Foundation of China under Grant (61373174), the Natural Science Foundation of Henan Province (152300410097, 142300410352), the Higher School Key Scientific Research Projects of Henan Province (14A110024, 16A110014, 17A110021), the Major Scientific Research Projects of Henan Institute of Science and Technology (2015ZD07), the High-level Scientific Research Personnel Project for Henan Institute of Science and Technology (2015037), and the Science and Technology Innovation Project for Henan Institute of Science and Technology.

References

[1] Almogy Y., Levin O., Parametric analysis of a multistage stochastic shipping problem, Operational Research 69, Tavistock Publications, London, England, 1970
[2] Gao Y., Jin S., A global optimization algorithm for sum of linear ratios problem, J. Appl. Math., 2013, http://dx.doi.org/10.1155/2013/276245
[3] Colantoni C.S., Manes R.P., Whinston A., Programming, profit rates, and pricing decisions, Accounting Review, 1969, 44, 467-481
[4] Konno H., Inori M., Bond portfolio optimization by bilinear fractional programming, J. Oper. Res. Soc. Jpn., 1989, 32, 143-158
[5] Konno H., Watanabe H., Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach, J. Oper. Res. Soc. Jpn., 1996, 39, 295-306
[6] Charnes A., Cooper W.W., Programming with linear fractional functions, Naval Res. Logistics Quarterly, 1962, 9, 181-186
[7] Konno H., Yajima Y., Matsui T., Parametric simplex algorithms for solving a special class of nonconvex minimization problem, J. Glob. Optim., 1991, 1, 65-81
[8] Falk J.E., Palocsay S.W., Image space analysis of generalized fractional programs, J. Glob. Optim., 1994, 4, 63-88
[9] Ji Y., Zhang K.C., Qu S.J., A deterministic global optimization algorithm, Appl. Math. Comput., 2007, 185, 382-387
[10] Konno H., Fukaishi K., A branch and bound algorithm for solving low rank linear multiplicative and fractional programming problems, J. Glob. Optim., 2000, 18, 283-299
[11] Shen P., Wang C., Global optimization for sum of linear ratios problem with coefficients, Appl. Math. Comput., 2006, 176, 219-229
[12] Wang C., Shen P., A global optimization algorithm for linear fractional programming, Appl. Math. Comput., 2008, 204, 281-287
[13] Jiao H., A branch and bound algorithm for globally solving a class of nonconvex programming problem, Nonlinear Anal., 2009, 70, 1113-1123
[14] Jiao H., Chen Y., A note on a deterministic global optimization algorithm, Appl. Math. Comput., 2008, 202, 67-70
[15] Wang Y., Shen P., Liang Z., A branch-and-bound algorithm to globally solve the sum of several linear ratios, Appl. Math. Comput., 2005, 168, 89-101
[16] Shi Y., Global optimization for sum of ratios problems, Master’s Thesis, Henan Normal University, Xinxiang, China, 2011
[17] Phuong N.T.H., Tuy H., A unified monotonic approach to generalized linear fractional programming, J. Glob. Optim., 2003, 26, 229-259
[18] Jiao H., Wang Z., Chen Y., Global optimization algorithm for sum of generalized polynomial ratios problem, Appl. Math. Modell., 2013, 37, 187-197
[19] Pei Y., Zhu D., Global optimization method for maximizing the sum of difference of convex functions ratios over nonconvex region, J. Appl. Math. Comput., 2013, 41, 153-169
[20] Jiao H.-W., Liu S.-Y., A practicable branch and bound algorithm for sum of linear ratios problem, Eur. J. Oper. Res., 2015, 243(3), 723-730
[21] Jiao H., Liu S., Range division and compression algorithm for quadratically constrained sum of quadratic ratios, Comput. Appl. Math., (in press), DOI: 10.1007/s40314-015-0224-5
[22] Jiao H.-W., Liu S.-Y., Zhao Y.-F., Effective algorithm for solving the generalized linear multiplicative problem with generalized polynomial constraints, Appl. Math. Model., 2015, 39(23-24), 7568-7582
[23] Horst R., Tuy H., Global Optimization: Deterministic Approaches, 2nd Edition, Springer-Verlag, Berlin, 1993

About the article

Received: 2016-02-20

Accepted: 2016-08-02

Published Online: 2016-10-06

Published in Print: 2016-01-01



Citation Information: Open Mathematics, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2016-0058.

© 2016 Jiao et al., published by De Gruyter Open. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
