
# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo

IMPACT FACTOR 2018: 0.726
5-year IMPACT FACTOR: 0.869

CiteScore 2018: 0.90

SCImago Journal Rank (SJR) 2018: 0.323
Source Normalized Impact per Paper (SNIP) 2018: 0.821

Mathematical Citation Quotient (MCQ) 2017: 0.32

ICV 2017: 161.82

Open Access | Online ISSN: 2391-5455
Volume 16, Issue 1

# An effective algorithm for globally solving quadratic programs using parametric linearization technique

Shuai Tang
Yuzhen Chen, School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China
Yunrui Guo, School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China
Published Online: 2018-11-10 | DOI: https://doi.org/10.1515/math-2018-0108

## Abstract

In this paper, we present an effective algorithm for globally solving quadratic programs with quadratic constraints, a class of problems with wide applications in engineering design, engineering optimization, route optimization, etc. By utilizing a new parametric linearization technique, we derive a parametric linear programming relaxation of the quadratic program with quadratic constraints. To improve the computational speed of the proposed algorithm, some interval reduction operations are used to compress the investigated intervals. By successively partitioning the initial box and solving a sequence of parametric linear programming relaxation problems, the proposed algorithm converges to the global optimal solution of the initial problem. Finally, comparisons with some known algorithms demonstrate that the proposed algorithm has higher computational efficiency.

MSC 2010: 90C20; 90C26; 65K05

## 1 Introduction

$$(\mathrm{QP}):\quad
\begin{array}{ll}
\min & F_0(y)=\displaystyle\sum_{k=1}^{n}d_k^0y_k+\sum_{j=1}^{n}\sum_{k=1}^{n}p_{jk}^0y_jy_k\\[2mm]
\text{s.t.} & F_i(y)=\displaystyle\sum_{k=1}^{n}d_k^iy_k+\sum_{j=1}^{n}\sum_{k=1}^{n}p_{jk}^iy_jy_k\le\beta_i,\quad i=1,\dots,m,\\[2mm]
& y\in Y^0=\{y\in\mathbb{R}^n: l^0\le y\le u^0\},
\end{array}$$

where $l^0=(l_1^0,\dots,l_n^0)^T$, $u^0=(u_1^0,\dots,u_n^0)^T$, and $p_{jk}^i$, $d_k^i$, $\beta_i$ are arbitrary real numbers. QP has wide applications in route optimization, engineering design, investment portfolios, management decisions, production planning, etc. In addition, QP usually has multiple local optimal solutions that are not globally optimal, so this class of problems presents serious theoretical difficulties and computational complexity. Thus, it is necessary to develop an efficient algorithm for globally solving QP.
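The objective and constraints of QP all share the form $\sum_k d_ky_k+\sum_{j,k}p_{jk}y_jy_k$. As a minimal illustration (the function and variable names below are our own, not from the paper), such a function can be evaluated as:

```python
def F(d, P, y):
    """Value of sum_k d[k]*y[k] + sum_{j,k} P[j][k]*y[j]*y[k] at the point y."""
    n = len(y)
    linear = sum(d[k] * y[k] for k in range(n))
    quadratic = sum(P[j][k] * y[j] * y[k] for j in range(n) for k in range(n))
    return linear + quadratic

# A tiny hypothetical 2-variable instance: F(y) = y1 + y2 + y1^2 + 2*y1*y2
d0 = [1.0, 1.0]
P0 = [[1.0, 2.0],
      [0.0, 0.0]]
print(F(d0, P0, [1.0, 2.0]))  # 1 + 2 + 1 + 4 = 8.0
```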

In the last several decades, many algorithms have been developed for solving QP and its special cases, such as the duality-bounds algorithm [1], branch-and-reduce methods [2, 3, 4, 5, 6], an approximation approach [7], branch-and-bound approaches [8, 9, 10], and so on. In addition, some algorithms for polynomial programming [11, 12, 13, 14, 15] and quadratic fractional programming [16, 17] can also be used to solve QP. Although these algorithms can be applied to QP and its special cases, comparatively little work has been devoted to globally solving the investigated quadratic programs with quadratic constraints.

This paper presents a new global optimization branch-and-bound algorithm for solving QP. First of all, we derive a new parametric linearization technique. By utilizing this technique, the initial QP can be converted into a parametric linear programming relaxation problem, which provides lower bounds for the global optimal values of the initial QP and its subproblems. Based on the branch-and-bound framework, a new global optimization algorithm is designed for solving QP. The proposed algorithm converges to the global optimal solution of the initial QP by successively subdividing the initial box and solving the converted parametric linear programming relaxation problems. To improve the computational speed of the proposed branch-and-bound algorithm, some interval reduction operations are used to compress the investigated intervals. Finally, numerical comparisons with some known algorithms show the higher computational efficiency of the proposed branch-and-bound algorithm.

The remainder of this paper is organized as follows. First, in order to derive the parametric linear programming relaxation problem of QP, Section 2 presents a new parametric linearization technique. Second, in Section 3, by combining the derived parametric linear programming relaxation problem with the interval reduction operations within the branch-and-bound framework, an effective branch-and-bound algorithm is constructed for globally solving QP. Third, in Section 4, the proposed algorithm is compared with some known methods on several test problems from the literature to verify its computational feasibility. Finally, some conclusions are drawn.

## 2 New parametric linearization approach

In this section, we present a new parametric linearization approach for constructing the parametric linear programming relaxation problem of QP; its detailed derivation is given as follows. Without loss of generality, we assume that $Y=\{(y_1,y_2,\dots,y_n)^T\in\mathbb{R}^n : l_j\le y_j\le u_j,\ j=1,\dots,n\}\subseteq Y^0$, that $\gamma=(\gamma_{jk})_{n\times n}\in\mathbb{R}^{n\times n}$ is a symmetric matrix, and that $\gamma_{jk}\in\{0,1\}$.

For convenience of expression, for any $y\in Y$ and any $j,k\in\{1,2,\dots,n\}$ with $j\ne k$, we define

$$\begin{aligned}
y_k(\gamma_{kk})&=l_k+\gamma_{kk}(u_k-l_k), \qquad y_k(1-\gamma_{kk})=l_k+(1-\gamma_{kk})(u_k-l_k),\\
f_{kk}(y)&=y_k^2,\\
\underline{f}_{kk}(y,Y,\gamma_{kk})&=[y_k(\gamma_{kk})]^2+2y_k(\gamma_{kk})[y_k-y_k(\gamma_{kk})],\\
\overline{f}_{kk}(y,Y,\gamma_{kk})&=[y_k(\gamma_{kk})]^2+2y_k(1-\gamma_{kk})[y_k-y_k(\gamma_{kk})],\\
y_j(\gamma_{jk})&=l_j+\gamma_{jk}(u_j-l_j), \qquad y_k(\gamma_{jk})=l_k+\gamma_{jk}(u_k-l_k),\\
y_j(1-\gamma_{jk})&=l_j+(1-\gamma_{jk})(u_j-l_j), \qquad y_k(1-\gamma_{jk})=l_k+(1-\gamma_{jk})(u_k-l_k),\\
(y_j+y_k)(\gamma_{jk})&=(l_j+l_k)+\gamma_{jk}(u_j+u_k-l_j-l_k),\\
(y_j+y_k)(1-\gamma_{jk})&=(l_j+l_k)+(1-\gamma_{jk})(u_j+u_k-l_j-l_k),\\
f_{jk}(y)&=y_jy_k=\frac{(y_j+y_k)^2-y_j^2-y_k^2}{2},\\
\underline{f}_{jk}(y,Y,\gamma_{jk})&=\frac12\Big\{\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big\},\\
\overline{f}_{jk}(y,Y,\gamma_{jk})&=\frac12\Big\{\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big\}.
\end{aligned}$$

It is obvious that

$$y_k(0)=l_k,\quad y_k(1)=u_k,\quad (y_j+y_k)(0)=l_j+l_k,\quad (y_j+y_k)(1)=u_j+u_k.$$
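A two-line sketch (the helper name is our own) confirms that the parametric reference point $y_k(\gamma)=l_k+\gamma(u_k-l_k)$ reduces to the box endpoints for $\gamma\in\{0,1\}$:

```python
def y_ref(l, u, g):
    """Parametric reference point y(g) = l + g*(u - l) with g in {0, 1}."""
    return l + g * (u - l)

l_k, u_k = 1.0, 3.0
assert y_ref(l_k, u_k, 0) == l_k  # y_k(0) = l_k
assert y_ref(l_k, u_k, 1) == u_k  # y_k(1) = u_k
```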

Theorem 2.1. For any $j,k\in\{1,2,\dots,n\}$ with $j\ne k$ and any $y\in Y$, we have:

• (i) The following inequalities hold:

$$\begin{aligned}
&\underline{f}_{kk}(y,Y,\gamma_{kk})\le f_{kk}(y)\le \overline{f}_{kk}(y,Y,\gamma_{kk}),\\
&[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\le y_j^2\le [y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})],\\
&[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\le y_k^2\le [y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})],\\
&(y_j+y_k)^2\le [(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})],\\
&(y_j+y_k)^2\ge [(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})],
\end{aligned}$$ (1)

and

$$\underline{f}_{jk}(y,Y,\gamma_{jk})\le f_{jk}(y)\le \overline{f}_{jk}(y,Y,\gamma_{jk}).$$ (2)

• (ii) The following limits hold:

$$\lim_{\|u-l\|\to 0}\big[f_{kk}(y)-\underline{f}_{kk}(y,Y,\gamma_{kk})\big]=0,$$ (3)

$$\begin{aligned}
&\lim_{\|u-l\|\to 0}\big[\overline{f}_{kk}(y,Y,\gamma_{kk})-f_{kk}(y)\big]=0,\\
&\lim_{\|u-l\|\to 0}\Big[y_j^2-\big\{[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\Big]=0,\\
&\lim_{\|u-l\|\to 0}\Big[[y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})]-y_j^2\Big]=0,\\
&\lim_{\|u-l\|\to 0}\Big[y_k^2-\big\{[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big]=0,\\
&\lim_{\|u-l\|\to 0}\Big[[y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})]-y_k^2\Big]=0,\\
&\lim_{\|u-l\|\to 0}\Big[[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]-(y_j+y_k)^2\Big]=0,\\
&\lim_{\|u-l\|\to 0}\Big[(y_j+y_k)^2-\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[(y_j+y_k)-(y_j+y_k)(\gamma_{jk})]\big\}\Big]=0,
\end{aligned}$$ (4)

$$\lim_{\|u-l\|\to 0}\big[f_{jk}(y)-\underline{f}_{jk}(y,Y,\gamma_{jk})\big]=0$$ (5)

and

$$\lim_{\|u-l\|\to 0}\big[\overline{f}_{jk}(y,Y,\gamma_{jk})-f_{jk}(y)\big]=0.$$ (6)

Proof. (i) By the mean value theorem, for any $y\in Y$ there exists a point $\xi_k=\alpha y_k+(1-\alpha)y_k(\gamma_{kk})$ with $\alpha\in[0,1]$ such that

$$y_k^2=[y_k(\gamma_{kk})]^2+2\xi_k[y_k-y_k(\gamma_{kk})].$$

If $\gamma_{kk}=0$, then we have

$$\xi_k\ge l_k=y_k(\gamma_{kk})\quad\text{and}\quad y_k-y_k(\gamma_{kk})=y_k-l_k\ge 0.$$

If $\gamma_{kk}=1$, then it follows that

$$\xi_k\le u_k=y_k(\gamma_{kk})\quad\text{and}\quad y_k-y_k(\gamma_{kk})=y_k-u_k\le 0.$$

Thus, we get

$$f_{kk}(y)=y_k^2=[y_k(\gamma_{kk})]^2+2\xi_k[y_k-y_k(\gamma_{kk})]\ge [y_k(\gamma_{kk})]^2+2y_k(\gamma_{kk})[y_k-y_k(\gamma_{kk})]=\underline{f}_{kk}(y,Y,\gamma_{kk}).$$

Similarly, if $\gamma_{kk}=0$, then we have

$$\xi_k\le u_k=y_k(1-\gamma_{kk})\quad\text{and}\quad y_k-y_k(\gamma_{kk})=y_k-l_k\ge 0.$$

If $\gamma_{kk}=1$, then it follows that

$$\xi_k\ge l_k=y_k(1-\gamma_{kk})\quad\text{and}\quad y_k-y_k(\gamma_{kk})=y_k-u_k\le 0.$$

Thus, we get

$$f_{kk}(y)=y_k^2=[y_k(\gamma_{kk})]^2+2\xi_k[y_k-y_k(\gamma_{kk})]\le [y_k(\gamma_{kk})]^2+2y_k(1-\gamma_{kk})[y_k-y_k(\gamma_{kk})]=\overline{f}_{kk}(y,Y,\gamma_{kk}).$$

Therefore, for any $y\in Y$, we have

$$\underline{f}_{kk}(y,Y,\gamma_{kk})\le f_{kk}(y)\le \overline{f}_{kk}(y,Y,\gamma_{kk}).$$

From the inequality (1), replacing $\gamma_{kk}$ by $\gamma_{jk}$ and $y_k$ by $y_j$, we get

$$[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\le y_j^2\le [y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})].$$

From the inequality (1), replacing $\gamma_{kk}$ by $\gamma_{jk}$, we get

$$[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\le y_k^2\le [y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})].$$

From (1), replacing $\gamma_{kk}$ and $y_k$ by $\gamma_{jk}$ and $(y_j+y_k)$, respectively, we get

$$\begin{aligned}
(y_j+y_k)^2&\le [(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[(y_j+y_k)-(y_j+y_k)(\gamma_{jk})],\\
(y_j+y_k)^2&\ge [(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[(y_j+y_k)-(y_j+y_k)(\gamma_{jk})].
\end{aligned}$$

Combining the preceding inequalities, it follows that

$$\begin{aligned}
f_{jk}(y)=y_jy_k=\frac{(y_j+y_k)^2-y_j^2-y_k^2}{2}&\ge \frac12\Big\{\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big\}=\underline{f}_{jk}(y,Y,\gamma_{jk})
\end{aligned}$$

and

$$\begin{aligned}
f_{jk}(y)=y_jy_k=\frac{(y_j+y_k)^2-y_j^2-y_k^2}{2}&\le \frac12\Big\{\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\\
&\qquad-\big\{[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big\}=\overline{f}_{jk}(y,Y,\gamma_{jk}).
\end{aligned}$$

Therefore, we have

$$\underline{f}_{jk}(y,Y,\gamma_{jk})\le f_{jk}(y)\le \overline{f}_{jk}(y,Y,\gamma_{jk}).$$
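The sandwich inequality (2) is easy to check numerically. The sketch below implements the under- and overestimators of $y_jy_k$ exactly as defined above (the function names are our own) and verifies the bounds at random points of a sample box for both parameter values:

```python
import random

def y_ref(l, u, g):
    return l + g * (u - l)

def sq_lo(x, l, u, g):
    """Underestimator of x^2 on [l, u]: tangent line at the point y(g)."""
    p = y_ref(l, u, g)
    return p * p + 2 * p * (x - p)

def sq_up(x, l, u, g):
    """Overestimator of x^2 on [l, u]: line through (y(g), y(g)^2) with slope 2*y(1-g)."""
    p, q = y_ref(l, u, g), y_ref(l, u, 1 - g)
    return p * p + 2 * q * (x - p)

def f_jk_lo(yj, yk, lj, uj, lk, uk, g):
    # Underestimate (yj+yk)^2 and overestimate yj^2, yk^2 in the identity
    # yj*yk = ((yj+yk)^2 - yj^2 - yk^2) / 2.
    return 0.5 * (sq_lo(yj + yk, lj + lk, uj + uk, g)
                  - sq_up(yj, lj, uj, g) - sq_up(yk, lk, uk, g))

def f_jk_up(yj, yk, lj, uj, lk, uk, g):
    return 0.5 * (sq_up(yj + yk, lj + lk, uj + uk, g)
                  - sq_lo(yj, lj, uj, g) - sq_lo(yk, lk, uk, g))

random.seed(0)
lj, uj, lk, uk = 1.0, 2.0, 0.5, 3.0   # an arbitrary sample box
for _ in range(1000):
    yj, yk = random.uniform(lj, uj), random.uniform(lk, uk)
    for g in (0, 1):
        assert f_jk_lo(yj, yk, lj, uj, lk, uk, g) <= yj * yk + 1e-9
        assert yj * yk <= f_jk_up(yj, yk, lj, uj, lk, uk, g) + 1e-9
print("bounds verified")
```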

(ii) Since

$$f_{kk}(y)-\underline{f}_{kk}(y,Y,\gamma_{kk})=y_k^2-\big\{[y_k(\gamma_{kk})]^2+2y_k(\gamma_{kk})[y_k-y_k(\gamma_{kk})]\big\}=\big(y_k-y_k(\gamma_{kk})\big)^2\le (u_k-l_k)^2,$$ (7)

we have

$$\lim_{\|u-l\|\to 0}\big[f_{kk}(y)-\underline{f}_{kk}(y,Y,\gamma_{kk})\big]=0.$$

Also, since

$$\begin{aligned}
\overline{f}_{kk}(y,Y,\gamma_{kk})-f_{kk}(y)&=[y_k(\gamma_{kk})]^2+2y_k(1-\gamma_{kk})[y_k-y_k(\gamma_{kk})]-y_k^2\\
&=\big(y_k(\gamma_{kk})+y_k\big)\big(y_k(\gamma_{kk})-y_k\big)+2y_k(1-\gamma_{kk})\big(y_k-y_k(\gamma_{kk})\big)\\
&=\big[y_k-y_k(\gamma_{kk})\big]\big[2y_k(1-\gamma_{kk})-y_k(\gamma_{kk})-y_k\big]\\
&=\big[y_k-y_k(\gamma_{kk})\big]\big[y_k(1-\gamma_{kk})-y_k(\gamma_{kk})\big]+\big[y_k-y_k(\gamma_{kk})\big]\big[y_k(1-\gamma_{kk})-y_k\big]\\
&\le 2(u_k-l_k)^2,
\end{aligned}$$ (8)

it follows that

$$\lim_{\|u-l\|\to 0}\big[\overline{f}_{kk}(y,Y,\gamma_{kk})-f_{kk}(y)\big]=0.$$

From the limits (3) and (4), replacing $\gamma_{kk}$ and $y_k$ by $\gamma_{jk}$ and $y_j$, respectively, we have

$$\lim_{\|u-l\|\to 0}\Big[y_j^2-\big\{[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\Big]=0$$

and

$$\lim_{\|u-l\|\to 0}\Big[[y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})]-y_j^2\Big]=0.$$

From the limits (3) and (4), replacing $\gamma_{kk}$ by $\gamma_{jk}$, it follows that

$$\lim_{\|u-l\|\to 0}\Big[y_k^2-\big\{[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big]=0$$

and

$$\lim_{\|u-l\|\to 0}\Big[[y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})]-y_k^2\Big]=0.$$

By the limits (3) and (4), replacing $\gamma_{kk}$ and $y_k$ by $\gamma_{jk}$ and $(y_j+y_k)$, respectively, we get

$$\lim_{\|u-l\|\to 0}\Big[[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]-(y_j+y_k)^2\Big]=0$$

and

$$\lim_{\|u-l\|\to 0}\Big[(y_j+y_k)^2-\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}\Big]=0.$$

From the inequalities (7) and (8), we have

$$\begin{aligned}
f_{jk}(y)-\underline{f}_{jk}(y,Y,\gamma_{jk})&=y_jy_k-\underline{f}_{jk}(y,Y,\gamma_{jk})\\
&=\frac12\Big[\big\{[y_j(\gamma_{jk})]^2+2y_j(1-\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}-y_j^2\Big]\\
&\quad+\frac12\Big[\big\{[y_k(\gamma_{jk})]^2+2y_k(1-\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}-y_k^2\Big]\\
&\quad+\frac12\Big[(y_j+y_k)^2-\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}\Big]\\
&\le (u_j-l_j)^2+(u_k-l_k)^2+\frac12(u_j+u_k-l_j-l_k)^2.
\end{aligned}$$

Thus, we get

$$\lim_{\|u-l\|\to 0}\big[f_{jk}(y)-\underline{f}_{jk}(y,Y,\gamma_{jk})\big]=0.$$

Also from the inequalities (7) and (8), we get

$$\begin{aligned}
\overline{f}_{jk}(y,Y,\gamma_{jk})-f_{jk}(y)&=\overline{f}_{jk}(y,Y,\gamma_{jk})-y_jy_k\\
&=\frac12\Big[\big\{[(y_j+y_k)(\gamma_{jk})]^2+2(y_j+y_k)(1-\gamma_{jk})[y_j+y_k-(y_j+y_k)(\gamma_{jk})]\big\}-(y_j+y_k)^2\Big]\\
&\quad+\frac12\Big[y_j^2-\big\{[y_j(\gamma_{jk})]^2+2y_j(\gamma_{jk})[y_j-y_j(\gamma_{jk})]\big\}\Big]\\
&\quad+\frac12\Big[y_k^2-\big\{[y_k(\gamma_{jk})]^2+2y_k(\gamma_{jk})[y_k-y_k(\gamma_{jk})]\big\}\Big]\\
&\le (u_j+u_k-l_j-l_k)^2+\frac12(u_j-l_j)^2+\frac12(u_k-l_k)^2.
\end{aligned}$$

Thus, it follows that

$$\lim_{\|u-l\|\to 0}\big[\overline{f}_{jk}(y,Y,\gamma_{jk})-f_{jk}(y)\big]=0.$$

Without loss of generality, for any $Y=[l,u]\subseteq Y^0$, any parameter matrix $\gamma=(\gamma_{jk})_{n\times n}$, any $y\in Y$ and $i\in\{0,1,\dots,m\}$, we let

$$\underline{f}_{kk}^i(y,Y,\gamma_{kk})=\begin{cases}p_{kk}^i\,\underline{f}_{kk}(y,Y,\gamma_{kk}), & \text{if } p_{kk}^i>0,\\ p_{kk}^i\,\overline{f}_{kk}(y,Y,\gamma_{kk}), & \text{if } p_{kk}^i<0,\end{cases}\qquad
\overline{f}_{kk}^i(y,Y,\gamma_{kk})=\begin{cases}p_{kk}^i\,\overline{f}_{kk}(y,Y,\gamma_{kk}), & \text{if } p_{kk}^i>0,\\ p_{kk}^i\,\underline{f}_{kk}(y,Y,\gamma_{kk}), & \text{if } p_{kk}^i<0,\end{cases}$$

$$\underline{f}_{jk}^i(y,Y,\gamma_{jk})=\begin{cases}p_{jk}^i\,\underline{f}_{jk}(y,Y,\gamma_{jk}), & \text{if } p_{jk}^i>0,\ j\ne k,\\ p_{jk}^i\,\overline{f}_{jk}(y,Y,\gamma_{jk}), & \text{if } p_{jk}^i<0,\ j\ne k,\end{cases}\qquad
\overline{f}_{jk}^i(y,Y,\gamma_{jk})=\begin{cases}p_{jk}^i\,\overline{f}_{jk}(y,Y,\gamma_{jk}), & \text{if } p_{jk}^i>0,\ j\ne k,\\ p_{jk}^i\,\underline{f}_{jk}(y,Y,\gamma_{jk}), & \text{if } p_{jk}^i<0,\ j\ne k,\end{cases}$$

$$F_i^L(y,Y,\gamma)=\sum_{k=1}^{n}\big(d_k^iy_k+\underline{f}_{kk}^i(y,Y,\gamma_{kk})\big)+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}\underline{f}_{jk}^i(y,Y,\gamma_{jk}),$$

$$F_i^U(y,Y,\gamma)=\sum_{k=1}^{n}\big(d_k^iy_k+\overline{f}_{kk}^i(y,Y,\gamma_{kk})\big)+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}\overline{f}_{jk}^i(y,Y,\gamma_{jk}).$$

Theorem 2.2. For any $y\in Y=[l,u]\subseteq Y^0$, any parameter matrix $\gamma=(\gamma_{jk})_{n\times n}$, and any $i=0,1,\dots,m$, we have the following conclusions:

$$F_i^L(y,Y,\gamma)\le F_i(y)\le F_i^U(y,Y,\gamma),\qquad \lim_{\|u-l\|\to 0}\big[F_i(y)-F_i^L(y,Y,\gamma)\big]=0$$

and

$$\lim_{\|u-l\|\to 0}\big[F_i^U(y,Y,\gamma)-F_i(y)\big]=0.$$

Proof. (i) From (1) and (2), for any $j,k\in\{1,\dots,n\}$, we get

$$\underline{f}_{kk}^i(y,Y,\gamma_{kk})\le p_{kk}^iy_k^2\le \overline{f}_{kk}^i(y,Y,\gamma_{kk})$$ (9)

and

$$\underline{f}_{jk}^i(y,Y,\gamma_{jk})\le p_{jk}^iy_jy_k\le \overline{f}_{jk}^i(y,Y,\gamma_{jk}).$$ (10)

By (9) and (10), for any $y\in Y\subseteq Y^0$, we have

$$\begin{aligned}
F_i^L(y,Y,\gamma)&=\sum_{k=1}^{n}\big(d_k^iy_k+\underline{f}_{kk}^i(y,Y,\gamma_{kk})\big)+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}\underline{f}_{jk}^i(y,Y,\gamma_{jk})\\
&\le \sum_{k=1}^{n}d_k^iy_k+\sum_{k=1}^{n}p_{kk}^iy_k^2+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}p_{jk}^iy_jy_k=F_i(y)\\
&\le \sum_{k=1}^{n}\big(d_k^iy_k+\overline{f}_{kk}^i(y,Y,\gamma_{kk})\big)+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}\overline{f}_{jk}^i(y,Y,\gamma_{jk})=F_i^U(y,Y,\gamma).
\end{aligned}$$

Therefore, we obtain

$$F_i^L(y,Y,\gamma)\le F_i(y)\le F_i^U(y,Y,\gamma).$$

(ii) We have

$$\begin{aligned}
F_i(y)-F_i^L(y,Y,\gamma)&=\sum_{k=1}^{n}\big[p_{kk}^iy_k^2-\underline{f}_{kk}^i(y,Y,\gamma_{kk})\big]+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}\big[p_{jk}^iy_jy_k-\underline{f}_{jk}^i(y,Y,\gamma_{jk})\big]\\
&=\sum_{k=1,\,p_{kk}^i>0}^{n}p_{kk}^i\big[f_{kk}(y)-\underline{f}_{kk}(y,Y,\gamma_{kk})\big]+\sum_{k=1,\,p_{kk}^i<0}^{n}p_{kk}^i\big[f_{kk}(y)-\overline{f}_{kk}(y,Y,\gamma_{kk})\big]\\
&\quad+\sum_{j=1}^{n}\sum_{k=1,k\ne j,\,p_{jk}^i>0}^{n}p_{jk}^i\big[f_{jk}(y)-\underline{f}_{jk}(y,Y,\gamma_{jk})\big]+\sum_{j=1}^{n}\sum_{k=1,k\ne j,\,p_{jk}^i<0}^{n}p_{jk}^i\big[f_{jk}(y)-\overline{f}_{jk}(y,Y,\gamma_{jk})\big].
\end{aligned}$$

From (3)-(6), we can obtain that

$$\lim_{\|u-l\|\to 0}\big[f_{kk}(y)-\underline{f}_{kk}(y,Y,\gamma_{kk})\big]=0,\quad \lim_{\|u-l\|\to 0}\big[\overline{f}_{kk}(y,Y,\gamma_{kk})-f_{kk}(y)\big]=0,$$

$$\lim_{\|u-l\|\to 0}\big[f_{jk}(y)-\underline{f}_{jk}(y,Y,\gamma_{jk})\big]=0\quad\text{and}\quad \lim_{\|u-l\|\to 0}\big[\overline{f}_{jk}(y,Y,\gamma_{jk})-f_{jk}(y)\big]=0.$$

Therefore, we obtain that

$$\lim_{\|u-l\|\to 0}\big[F_i(y)-F_i^L(y,Y,\gamma)\big]=0.$$

Similarly to the proof above, we can get that

$$\lim_{\|u-l\|\to 0}\big[F_i^U(y,Y,\gamma)-F_i(y)\big]=0.$$

The proof is completed.

By Theorem 2.2, we can establish the following parametric linear programming relaxation problem (PLPRP) of QP over Y:

$$(\mathrm{PLPRP}):\quad
\begin{array}{ll}
\min & F_0^L(y,Y,\gamma)\\
\text{s.t.} & F_i^L(y,Y,\gamma)\le \beta_i,\quad i=1,\dots,m,\\
& y\in Y=\{y: l\le y\le u\},
\end{array}$$

where

$$F_i^L(y,Y,\gamma)=\sum_{k=1}^{n}\big(d_k^iy_k+\underline{f}_{kk}^i(y,Y,\gamma_{kk})\big)+\sum_{j=1}^{n}\sum_{k=1,k\ne j}^{n}\underline{f}_{jk}^i(y,Y,\gamma_{jk}).$$

Based on the above parametric linearization process, the PLPRP provides a reliable lower bound for the minimum value of QP over the region Y. In addition, Theorem 2.2 ensures that the PLPRP approximates QP arbitrarily closely as ∥u − l∥ → 0, which guarantees the global convergence of the proposed branch-and-bound algorithm.

## 3 Branch-and-bound algorithm

In this section, a new global optimization branch-and-bound algorithm is presented for solving QP. The algorithm relies on several basic operations, which are described below.

## 3.1 Basic operations

Branching Operation: The branching operation produces a successively finer subdivision. We use rectangle bisection, which suffices to guarantee the global convergence of the branch-and-bound algorithm. For any selected rectangle $Y'=[l',u']\subseteq Y^0$, let $\eta\in\arg\max\{u'_j-l'_j : j=1,2,\dots,n\}$; then $Y'$ is subdivided into two sub-rectangles $Y'_1$ and $Y'_2$ by splitting the interval $[l'_\eta,u'_\eta]$ into the two sub-intervals $[l'_\eta,(l'_\eta+u'_\eta)/2]$ and $[(l'_\eta+u'_\eta)/2,u'_\eta]$.

Bounding Operation: For each sub-rectangle $Y\subseteq Y^0$ that has not been fathomed, the lower bounding operation solves the parametric linear programming relaxation problem over the corresponding rectangle; denote $LB_s=\min\{LB(Y)\mid Y\in\Omega_s\}$, where $\Omega_s$ is the set of sub-rectangles remaining after $s$ iterations. The upper bounding operation checks the feasibility of the midpoint of each inspected sub-rectangle $Y$ and of the optimal solution of the PLPRP over each inspected sub-rectangle $Y$, where $Y\in\Omega_s$. We then compute the objective function values of these known feasible points of QP and let $UB_s=\min\{F_0(y): y\in\Theta\}$ be the best upper bound, where $\Theta$ is the set of known feasible points.

Interval Reduction Operation: To enhance the running speed of the proposed algorithm, some interval reduction operations are given as follows.

For convenience of expression, for any $y\in Y$ and $i\in\{0,1,\dots,m\}$, let $UB$ be the current upper bound for QP, and let

$$F_i^L(y,Y,\gamma)=\sum_{j=1}^{n}c_{ij}(\gamma)y_j+e_i(\gamma),\qquad LB_i(\gamma)=\sum_{j=1}^{n}\min\{c_{ij}(\gamma)l_j,\ c_{ij}(\gamma)u_j\}+e_i(\gamma).$$
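Since $F_i^L(\cdot,Y,\gamma)$ is linear in $y$, the quantity $LB_i(\gamma)$ is exactly its coordinatewise minimum over the box $[l,u]$. A one-function sketch (the name is our own):

```python
def box_linear_min(c, e, l, u):
    """Minimum of sum_j c[j]*y[j] + e over the box l <= y <= u,
    taken coordinatewise: each y[j] goes to l[j] or u[j] by the sign of c[j]."""
    return sum(min(cj * lj, cj * uj) for cj, lj, uj in zip(c, l, u)) + e

# c = (2, -3), e = 1 on [0,1] x [0,2]: minimum at y = (0, 2), value 1 - 6 = -5
print(box_linear_min([2.0, -3.0], 1.0, [0.0, 0.0], [1.0, 2.0]))  # -5.0
```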

Similarly to Theorem 3.1 of Ref. [17], for any investigated sub-rectangle $Y=(Y_j)_{n}\subseteq Y^0$, we have the following conclusions:

1. (a) If $LB_0(\gamma)>UB$, then the entire sub-rectangle $Y$ can be discarded.

2. (b) If $LB_0(\gamma)\le UB$ and $c_{0q}(\gamma)>0$ for some $q\in\{1,2,\dots,n\}$, then the region $Y_q$ can be replaced by $\left[l_q,\ \dfrac{UB-LB_0(\gamma)+\min\{c_{0q}(\gamma)l_q,\ c_{0q}(\gamma)u_q\}}{c_{0q}(\gamma)}\right]\cap Y_q$.

3. (c) If $LB_0(\gamma)\le UB$ and $c_{0q}(\gamma)<0$ for some $q\in\{1,2,\dots,n\}$, then the region $Y_q$ can be replaced by $\left[\dfrac{UB-LB_0(\gamma)+\min\{c_{0q}(\gamma)l_q,\ c_{0q}(\gamma)u_q\}}{c_{0q}(\gamma)},\ u_q\right]\cap Y_q$.

4. (d) If $LB_i(\gamma)>\beta_i$ for some $i\in\{1,\dots,m\}$, then the entire sub-rectangle $Y$ can be discarded.

5. (e) If $LB_i(\gamma)\le\beta_i$ for some $i\in\{1,\dots,m\}$ and $c_{iq}(\gamma)>0$ for some $q\in\{1,2,\dots,n\}$, then the region $Y_q$ can be replaced by $\left[l_q,\ \dfrac{\beta_i-LB_i(\gamma)+\min\{c_{iq}(\gamma)l_q,\ c_{iq}(\gamma)u_q\}}{c_{iq}(\gamma)}\right]\cap Y_q$.

6. (f) If $LB_i(\gamma)\le\beta_i$ for some $i\in\{1,\dots,m\}$ and $c_{iq}(\gamma)<0$ for some $q\in\{1,2,\dots,n\}$, then the region $Y_q$ can be replaced by $\left[\dfrac{\beta_i-LB_i(\gamma)+\min\{c_{iq}(\gamma)l_q,\ c_{iq}(\gamma)u_q\}}{c_{iq}(\gamma)},\ u_q\right]\cap Y_q$.

Using these conclusions, interval reduction operations can be constructed that compress the investigated rectangular region and thus improve the convergence speed of the proposed algorithm.
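Rules (a)-(f) above can be sketched for a single linear condition $c^Ty+e\le\beta$ as follows (a hedged sketch with our own names; here $\beta$ plays the role of $\beta_i$ for a constraint, or of $UB$ for the objective):

```python
def reduce_box(c, e, l, u, beta):
    """Interval reduction for one linear condition c^T y + e <= beta over [l, u].
    Returns None when the whole box can be discarded (rules (a)/(d)),
    otherwise a tightened pair (l, u) (rules (b)/(c)/(e)/(f))."""
    lb = sum(min(cj * lj, cj * uj) for cj, lj, uj in zip(c, l, u)) + e
    if lb > beta:                 # rules (a)/(d): no point of the box satisfies it
        return None
    l, u = list(l), list(u)
    slack = beta - lb
    for q, cq in enumerate(c):
        if cq > 0:                # rules (b)/(e): (beta-lb+c_q*l_q)/c_q = l_q + slack/c_q
            u[q] = min(u[q], l[q] + slack / cq)
        elif cq < 0:              # rules (c)/(f): (beta-lb+c_q*u_q)/c_q = u_q + slack/c_q
            l[q] = max(l[q], u[q] + slack / cq)
    return l, u

# y1 + y2 <= 1 on [0,2]^2: lb = 0, slack = 1, so both upper bounds shrink to 1
print(reduce_box([1.0, 1.0], 0.0, [0.0, 0.0], [2.0, 2.0], 1.0))  # ([0.0, 0.0], [1.0, 1.0])
```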

## 3.2 New branch-and-bound algorithm

For any sub-rectangle $Y^s\subseteq Y^0$, let $LB(Y^s)$ and $y^s=y(Y^s)$ be the optimal value and an optimal solution of the PLPRP over $Y^s$, respectively. Combining the branching and bounding operations above with the interval reduction operations, the new global optimization branch-and-bound algorithm is described as follows.

## Algorithm Steps

Step 0. Given the termination error $\epsilon$ and a random parameter matrix $\gamma$, solve the PLPRP over the rectangle $Y^0$ to obtain its optimal solution $y^0$ and optimal value $LB(Y^0)$; let $LB_0=LB(Y^0)$ be the initial lower bound. If $y^0$ is feasible for QP, let $UB_0=F_0(y^0)$ be the initial upper bound; otherwise let $UB_0=+\infty$. If $UB_0-LB_0\le\epsilon$, the algorithm stops and $y^0$ is an $\epsilon$-global optimal solution of QP. Otherwise, let $\Omega_0=\{Y^0\}$, $\Lambda=\varnothing$, $\Theta=\varnothing$ and $s=1$.

1. Step 1. Let the new upper bound be $UB_s=UB_{s-1}$. Using the branching operation, partition the selected rectangle $Y^{s-1}$ into two sub-rectangles $Y^{s,1}$ and $Y^{s,2}$, and let $\Lambda=\Lambda\cup\{Y^{s-1}\}$ be the set of deleted sub-rectangles.

2. Step 2. For each sub-rectangle $Y^{s,t}$, $t=1,2$, use the interval reduction operations above to compress its range, and still denote the remaining sub-rectangle by $Y^{s,t}$.

3. Step 3. For each $t\in\{1,2\}$, solve the PLPRP over the sub-rectangle $Y^{s,t}$ to obtain its optimal solution $y^{s,t}$ and optimal value $LB(Y^{s,t})$. Denote $\Omega_s=\{Y\mid Y\in\Omega_{s-1}\cup\{Y^{s,1},Y^{s,2}\},\ Y\notin\Lambda\}$ and $LB_s=\min\{LB(Y)\mid Y\in\Omega_s\}$.

4. Step 4. If the midpoint $y^{mid}$ of a sub-rectangle $Y^{s,t}$ is feasible for QP, let $\Theta:=\Theta\cup\{y^{mid}\}$ and $UB_s=\min\{UB_s,F_0(y^{mid})\}$. If the optimal solution $y^{s,t}$ of the PLPRP is feasible for QP, let $UB_s=\min\{UB_s,F_0(y^{s,t})\}$. Let $y^s$ be the best known feasible point, i.e., the point satisfying $UB_s=F_0(y^s)$.

5. Step 5. If $UB_s-LB_s\le\epsilon$, then the algorithm stops and $y^s$ is an $\epsilon$-global optimal solution of QP. Otherwise, let $s=s+1$ and go to Step 1.
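The steps above can be condensed into a short skeleton. The sketch below is a hedged illustration only: it replaces the PLPRP lower bound with a crude interval bound (valid here because $y\ge 0$), omits the interval reduction operations, and is applied to Problem 4.2 of Section 4 ($\min y_1^2+y_2^2$ s.t. $0.3y_1y_2\ge 1$, $2\le y_1\le 5$, $1\le y_2\le 3$); all names are our own.

```python
import heapq

def solve_problem_42(eps=1e-2):
    def lower(l, u):
        # y >= 0 on this box, so the box minimum of y1^2 + y2^2 sits at l
        return l[0] ** 2 + l[1] ** 2

    def feasible(y):
        return 0.3 * y[0] * y[1] >= 1.0

    l0, u0 = [2.0, 1.0], [5.0, 3.0]
    ub, best = float("inf"), None
    heap = [(lower(l0, u0), l0, u0)]
    while heap:
        lb, l, u = heapq.heappop(heap)               # bound-improving selection
        if ub - lb <= eps:                           # Step 5: termination test
            break
        q = 0 if u[0] - l[0] >= u[1] - l[1] else 1   # Step 1: bisect the longest edge
        mid_q = 0.5 * (l[q] + u[q])
        for lo_q, hi_q in ((l[q], mid_q), (mid_q, u[q])):
            cl, cu = l[:], u[:]
            cl[q], cu[q] = lo_q, hi_q
            if 0.3 * cu[0] * cu[1] < 1.0:            # box holds no feasible point
                continue
            mid = [0.5 * (a + b) for a, b in zip(cl, cu)]
            if feasible(mid):                        # Step 4: midpoint upper bound
                val = mid[0] ** 2 + mid[1] ** 2
                if val < ub:
                    ub, best = val, mid
            clb = lower(cl, cu)
            if clb <= ub:                            # Step 3: keep promising boxes
                heapq.heappush(heap, (clb, cl, cu))
    return ub, best

ub, y = solve_problem_42()
print(round(ub, 3))  # within eps of the optimum 61/9 ~ 6.778, attained at y = (2, 5/3)
```

On this instance the constraint $0.3y_1y_2\ge 1$ is active at the optimum; the crude interval bound eventually closes the gap, while the PLPRP bound used by the paper is tighter and would prune sub-rectangles far more aggressively.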

## 3.3 Global convergence

Without loss of generality, let v be the global optimal value of QP. The global convergence of the proposed algorithm is established as follows.

Theorem 3.1. If the presented algorithm terminates after finitely many iterations s, then $y^s$ is an $\epsilon$-global optimal solution of QP; if the presented algorithm does not stop after finitely many iterations, then an infinite nested sequence $\{Y^s\}$ of sub-rectangles of $Y^0$ is generated, and its accumulation point is a global optimal solution of QP.

Proof. If the presented algorithm stops after a finite number of iterations $s\ge 0$, then it follows that $UB_s\le LB_s+\epsilon$. From Step 4, there must exist a feasible point $y^s$ satisfying $v\le UB_s=F_0(y^s)$. By the structure of the presented branch-and-bound algorithm, we have $LB_s\le v$. Combining these inequalities, we have

$$v\le UB_s=F_0(y^s)\le LB_s+\epsilon\le v+\epsilon.$$

Hence $y^s$ is an $\epsilon$-global optimal solution of QP.

If the presented algorithm does not stop after finitely many iterations, then, since the selected branching operation is rectangle bisection, the branching process is exhaustive, i.e., the branching operation ensures that the widths of all variable intervals converge to 0, i.e., ∥u − l∥ → 0. By Theorem 2.2, as ∥u − l∥ → 0 the optimal solution of the PLPRP sufficiently approximates the optimal solution of QP, which ensures that lim_{s→∞}(UB_s − LB_s) = 0; therefore, the bounding operation is consistent. Since the sub-rectangle attaining the current lowest bound is selected for further branching at the immediately following iteration, the selection operation is bound improving. By Theorem IV.3 in Ref. [18], the presented algorithm thus satisfies the sufficient conditions for global convergence (exhaustive branching, consistent bounding, and bound-improving selection), so it converges globally to the optimal solution of QP.

## 4 Numerical experiments

Let the parameter matrix $\gamma=(\gamma_{jk})_{n\times n}\in\mathbb{R}^{n\times n}$ with $\gamma_{jk}\in\{0,1\}$, and set the termination error $\epsilon=10^{-6}$. To compare with the known algorithms, several test problems from the literature are run on a microcomputer; the program is coded in C++, and all parametric linear programming relaxation problems are solved by the simplex method. These test problems and their numerical results are given below. In Tables 1 and 2, "Iter." and "Time(s)" denote the number of iterations and the running time of the algorithm, respectively.

Problem 4.1 (Ref. [11]).

$$\begin{array}{ll}
\min & F_0(y)=y_1\\
\text{s.t.} & F_1(y)=\frac14 y_1+\frac12 y_2-\frac1{16}y_1^2-\frac1{16}y_2^2\le 1,\\
& F_2(y)=\frac1{14}y_1^2+\frac1{14}y_2^2-\frac37 y_1-\frac37 y_2\le -1,\\
& 1\le y_1\le 5.5,\quad 1\le y_2\le 5.5.
\end{array}$$

Problem 4.2 (Refs. [4, 5, 6, 7, 8, 9, 10, 11, 12]).

$$\begin{array}{ll}
\min & F_0(y)=y_1^2+y_2^2\\
\text{s.t.} & F_1(y)=0.3y_1y_2\ge 1,\\
& 2\le y_1\le 5,\quad 1\le y_2\le 3.
\end{array}$$

Problem 4.3 (Ref. [11]).

$$\begin{array}{ll}
\min & F_0(y)=y_1y_2-2y_1+y_2+1\\
\text{s.t.} & F_1(y)=8y_2^2-6y_1-16y_2\le -11,\\
& F_2(y)=-y_2^2+3y_1+2y_2\le 7,\\
& 1\le y_1\le 2.5,\quad 1\le y_2\le 2.225.
\end{array}$$

Problem 4.4 (Refs. [4, 5, 6, 7, 8, 9, 10]).

$$\begin{array}{ll}
\min & F_0(y)=6y_1^2+4y_2^2+5y_1y_2\\
\text{s.t.} & F_1(y)=-6y_1y_2\le -48,\\
& 0\le y_1,y_2\le 10.
\end{array}$$

Problem 4.5 (Refs. [12, 13]).

$$\begin{array}{ll}
\min & F_0(y)=y_1\\
\text{s.t.} & F_1(y)=4y_2-4y_1^2\le 1,\\
& F_2(y)=-y_1-y_2\le -1,\\
& 0.01\le y_1,y_2\le 15.
\end{array}$$

Problem 4.6 (Refs. [10, 11, 12, 13, 14, 15]).

$$\begin{array}{ll}
\min & F_0(y)=-4y_2+(y_1-1)^2+y_2^2-10y_3^2\\
\text{s.t.} & F_1(y)=y_1^2+y_2^2+y_3^2\le 2,\\
& F_2(y)=(y_1-2)^2+y_2^2+y_3^2\le 2,\\
& 2-\sqrt{2}\le y_1\le 2,\quad 0\le y_2,y_3\le 2.
\end{array}$$

Problem 4.7 (Ref. [14]).

$$\begin{array}{ll}
\min & F_0(y)=-y_1+y_1y_2^{0.5}-y_2\\
\text{s.t.} & F_1(y)=-6y_1+8y_2\le 3,\\
& F_2(y)=3y_1-y_2\le 3,\\
& 1\le y_1,y_2\le 1.5.
\end{array}$$
Table 1

Numerical comparisons for Problems 4.1–4.7

Problem 4.8 (Ref. [9]).

$$\begin{array}{ll}
\min & F_0(y)=\frac12\langle y,Q^0y\rangle+\langle y,d^0\rangle\\
\text{s.t.} & F_i(y)=\frac12\langle y,Q^iy\rangle+\langle y,d^i\rangle\le \beta_i,\quad i=1,\dots,m,\\
& 0\le y_j\le 10,\quad j=1,\dots,n.
\end{array}$$

Each element of $Q^0$ is randomly generated in [0, 1]; each element of $Q^i$ ($i=1,\dots,m$) is randomly generated in [−1, 0]; each element of $d^0$ is randomly generated in [0, 1]; each element of $d^i$ ($i=1,\dots,m$) is randomly generated in [−1, 0]; each $\beta_i$ ($i=1,\dots,m$) is randomly generated in [−300, −90]; and each element of $\gamma=(\gamma_{jk})_{n\times n}\in\mathbb{R}^{n\times n}$ is randomly drawn from {0, 1}.
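The generation scheme just described can be sketched as follows (the ranges are taken from the text; the helper name and use of Python's `random` module are our own):

```python
import random

def random_instance(n, m, seed=0):
    """Random data for Problem 4.8: Q0, d0 entries in [0,1]; Qi, di entries
    in [-1,0]; beta_i in [-300,-90]; gamma_jk drawn from {0, 1}."""
    rng = random.Random(seed)
    Q0 = [[rng.uniform(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    d0 = [rng.uniform(0.0, 1.0) for _ in range(n)]
    Qs = [[[rng.uniform(-1.0, 0.0) for _ in range(n)] for _ in range(n)]
          for _ in range(m)]
    ds = [[rng.uniform(-1.0, 0.0) for _ in range(n)] for _ in range(m)]
    betas = [rng.uniform(-300.0, -90.0) for _ in range(m)]
    gamma = [[rng.choice((0, 1)) for _ in range(n)] for _ in range(n)]
    return Q0, d0, Qs, ds, betas, gamma

Q0, d0, Qs, ds, betas, gamma = random_instance(n=3, m=2)
print(len(Q0), len(Qs), len(betas))  # 3 2 2
```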

We denote by n the dimension of the problem and by m the number of constraints. Numerical results for Problem 4.8 are given in Table 2.

Table 2

Computational comparisons with Ref. [9] for Problem 4.8

Compared with the known algorithms, the numerical experimental results for Problems 4.1–4.8 demonstrate that the proposed algorithm can globally solve QP with higher computational efficiency.

## 5 Concluding remarks

In this article, based on the branch-and-bound framework, we present a new global optimization algorithm for solving quadratic programs with quadratic constraints. In this algorithm, a new parametric linearization technique is derived, by which the parametric linear programming relaxation problem of QP can be obtained. In addition, some interval reduction operations are proposed to improve the computational speed of the branch-and-bound algorithm. The presented algorithm converges to the global optimal solution of QP by successively partitioning the initial rectangle and solving a sequence of parametric linear programming relaxation problems. Finally, numerical results show that the proposed algorithm has higher computational efficiency than the existing algorithms.

## Acknowledgement

This work was supported by the Science and Technology Key Project of Henan Province (182102310941) and the Higher School Key Scientific Research Projects of Henan Province (18A110019).

## References

• [1]

Shen P., Gu M., A duality-bounds algorithm for non-convex quadratic programs with additional multiplicative constraints. Appl. Math. Comput., 2008, 198: 1-11.

• [2]

Jiao H., Chen Y.-Q., Cheng W.-X., A novel optimization method for nonconvex quadratically constrained quadratic programs. Abstr. Appl. Anal., 2014, Article ID 698489, 11 pages.

• [3]

Jiao H., Chen R., A parametric linearizing approach for quadratically inequality constrained quadratic programs. Open Math., 2018, 16(1): 407-419.

• [4]

Jiao H., Liu S., Lu N., A parametric linear relaxation algorithm for globally solving nonconvex quadratic programming. Appl. Math. Comput., 2015, 250: 973-985.

• [5]

Gao Y., Shang Y., Zhang L., A branch and reduce approach for solving nonconvex quadratic programming problems with quadratic constraints. OR Transactions, 2005, 9(2): 9-20.

• [6]

Jiao H., Liu S., An efficient algorithm for quadratic sum-of-ratios fractional programs problem. Numer. Func. Anal. Opt., 2017, 38(11): 1426-1445.

• [7]

Fu M., Luo Z.Q., Ye Y., Approximation algorithms for quadratic programming. J. Comb. Optim., 1998, 2: 29-50.

• [8]

Qu S.-J., Ji Y., Zhang K.-C., A deterministic global optimization algorithm based on a linearizing method for nonconvex quadratically constrained programs. Math. Comput. Model., 2008, 48: 1737-1743.

• [9]

Qu S.-J., Zhang K.-C., Ji, Y., A global optimization algorithm using parametric linearization relaxation. Appl. Math. Comput., 2007, 186: 763-771.

• [10]

Jiao H., Chen Y., A global optimization algorithm for generalized quadratic programming. J. Appl. Math., 2013, Article ID 215312, 9 pages.

• [11]

Shen P., Jiao H., A new rectangle branch-and-pruning approach for generalized geometric programming. Appl. Math. Comput., 2006, 183: 1027-1038.

• [12]

Wang Y., Liang Z., A deterministic global optimization algorithm for generalized geometric programming. Appl. Math. Comput., 2005, 168: 722-737.

• [13]

Wang Y.J., Zhang K.C., Gao Y.L., Global optimization of generalized geometric programming. Comput. Math. Appl., 2004, 48: 1505-1516.

• [14]

Shen P., Linearization method of global optimization for generalized geometric programming. Appl. Math. Comput., 2005, 162: 353-370.

• [15]

Shen P., Li X., Branch-reduction-bound algorithm for generalized geometric programming. J. Glob. Optim., 2013, 56(3): 1123-1142.

• [16]

Jiao H., Liu S., Zhao Y., Effective algorithm for solving the generalized linear multiplicative problem with generalized polynomial constraints. Appl. Math. Model., 2015, 39: 7568-7582.

• [17]

Jiao H., Liu S., Range division and compression algorithm for quadratically constrained sum of quadratic ratios, Comput. Appl. Math., 2017, 36(1): 225-247.

• [18]

Horst R., Tuy H., Global Optimization: Deterministic Approaches, second ed., Springer, Berlin, Germany, 1993.

Accepted: 2018-09-28

Published Online: 2018-11-10

Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 1300–1312, ISSN (Online) 2391-5455,
