Open Access. Published by Oldenbourg Wissenschaftsverlag, March 25, 2022, under a CC BY 4.0 license.

A compendium of optimization algorithms for distributed linear-quadratic MPC

Ein Optimierungskompendium für die verteilte linear-quadratische modellprädiktive Regelung
Gösta Stomberg, Alexander Engelmann and Timm Faulwasser


Model Predictive Control (MPC) for {networked, cyber-physical, multi-agent} systems requires numerical methods that solve optimal control problems subject to communication and real-time requirements. This paper introduces six distributed optimization algorithms and compares their properties in the context of distributed MPC for linear systems with convex quadratic objectives and polytopic constraints. In particular, two variants of dual decomposition, the alternating direction method of multipliers, a distributed active set method, an essentially decentralized interior point method, and Jacobi iterations are discussed. Numerical examples illustrate the challenges, the prospects, and the limits of distributed MPC with inexact solutions.


Model predictive control of networked cyber-physical systems requires numerical methods that solve optimal control problems in real time and under communication constraints. This contribution compares six algorithms for distributed predictive control of linear systems with convex quadratic objectives and affine constraints with respect to their convergence properties and their communication requirements. Two variants of dual decomposition, the alternating direction method of multipliers, a distributed active set strategy, a distributed interior point method, and Jacobi iterations are covered. Numerical examples illustrate the potential, but also the challenges and limits, of distributed predictive control based on inexact solutions of the optimal control problems.

1 Introduction

The formulation and the efficient numerical solution of Optimal Control Problems (OCP) is key for a variety of tasks in systems and control. In particular, research on control of {networked, cyber-physical, multi-agent} systems has led to manifold results on Distributed MPC (DMPC). One prominent approach to DMPC is to solve the OCPs in the MPC loop via iterative schemes in which agents communicate multiple times per MPC step [27], [32]. Further DMPC approaches (though not covered here) include sequential DMPC schemes and schemes where the physical coupling between agents is viewed as disturbance [27]. In general, different aspects of iterative DMPC are considered in the literature:

  1. The formulation of separable OCPs with local terminal constraints and costs [8], [10], [25].

  2. The design of tailored numerical algorithms, specifically the Jacobi iterations [11], [36], [39], Dual Decomposition (DD) [9], [19], the Alternating Direction Method of Multipliers (ADMM) [3], [6], [9], [31], and a recently proposed distributed Active Set Method (ASM) [37].

  3. The stability analysis of the closed loop under inexact optimization [3], [18], [25].

The third item is important since distributed optimization algorithms often exhibit slow asymptotic convergence and hence solving the OCPs to full accuracy might be prohibitive under strict communication requirements. Closed-loop stability under inexact optimization is analysed in [18], [25] for DD and in [3] for ADMM. The recent overview [5] discusses ADMM and a basic variant of DD for DMPC without numerical case studies.

In the present paper, we focus exclusively on deterministic optimization methods, i. e., we do not touch upon any heuristic schemes. Moreover, we assume perfect communication and we refer to [41] for an analysis including communication imperfection. Our compendium covers the Jacobi iterations, DD, a combination of DD with a Fast Gradient Method (DD-FGM), ADMM, ASM, and a recently proposed essentially decentralized Interior Point method (d-IP) [16]. Moreover, we illustrate the performance of the above algorithms on a numerical case study from formation control and on an adversarial example highlighting challenges in DMPC when using inexact solutions.

The paper is structured as follows: Section 2 provides the problem formulation as a partially separable quadratic program, Section 3 recalls optimization methods applicable to DMPC, Section 4 compares properties of the methods, and Section 5 applies the methods on two numerical examples.

Notation: The set $\mathbb{I}_{[0,N]}$ denotes the integers from 0 to N. $I_n$ is the identity matrix in $\mathbb{R}^{n\times n}$ and $\otimes$ is the Kronecker product. The largest singular value of a matrix is written as $\sigma_{\max}$. Given a matrix A, $[A]_p$ denotes the pth row of A. Likewise, $[a]_p$ is the pth component of the vector a. The concatenation of vectors into a column vector is $\mathrm{col}(\cdot)$. $A = [A_{ij}]$ is the block matrix with entries $A_{ij}$ at block position $(i,j)$.

2 Problem statement

Consider the discrete-time OCP

(1a) $\min_{x,u}\ \tfrac{1}{2}\sum_{i\in\mathcal{S}} J_i(x,u)$
(1b) $\text{s. t. for all } i\in\mathcal{S}:\ x_i^{k+1} = \sum_{j\in\mathcal{S}} A_{ij}x_j^k + \sum_{j\in\mathcal{S}} B_{ij}u_j^k,\quad k\in\mathbb{I}_{[0,N-1]}$
(1c) $x_i^0 = x_{i,0}$
(1d) $(x_i^k, u_i^k)\in\mathcal{X}_i\times\mathcal{U}_i,\quad k\in\mathbb{I}_{[0,N-1]}$
(1e) $x_i^N\in\mathcal{X}_i^f,$

where

$J_i(x,u) = \sum_{j\in\mathcal{S}}\Big((x_i^N)^\top P_{ij}\,x_j^N + \sum_{k=0}^{N-1}(x_i^k)^\top Q_{ij}\,x_j^k + (u_i^k)^\top R_{ij}\,u_j^k\Big).$

Here, $\mathcal{S} = \{1,\dots,|\mathcal{S}|\}$ is a set of linear time-invariant subsystems. The variables $x_i\in\mathcal{X}_i\subseteq\mathbb{R}^{n_i}$ and $u_i\in\mathcal{U}_i\subseteq\mathbb{R}^{m_i}$ denote the states and inputs of subsystem $i\in\mathcal{S}$ with corresponding constraint sets and terminal constraints $\mathcal{X}_i^f$. Moreover, $x_i \doteq \mathrm{col}_{k\in\mathbb{I}_{[0,N]}}(x_i^k)$ and $u_i \doteq \mathrm{col}_{k\in\mathbb{I}_{[0,N-1]}}(u_i^k)$ are shorthand for the predicted trajectories over the prediction horizon N. The matrices $A_{ij}\in\mathbb{R}^{n_i\times n_j}$ and $B_{ij}\in\mathbb{R}^{n_i\times m_j}$ describe the physical coupling between subsystems $(i,j)\in\mathcal{S}\times\mathcal{S}$, and the matrices $Q_{ij}\in\mathbb{R}^{n_i\times n_j}$, $R_{ij}\in\mathbb{R}^{m_i\times m_j}$, and $P_{ij}\in\mathbb{R}^{n_i\times n_j}$ describe the cost coupling.

The overall dynamics are

(2) $x^{k+1} = Ax^k + Bu^k,\qquad x^0 = \mathrm{col}_{i\in\mathcal{S}}(x_{i,0}),$

where $x \doteq \mathrm{col}_{i\in\mathcal{S}}(x_i)\in\mathbb{R}^n$, $u \doteq \mathrm{col}_{i\in\mathcal{S}}(u_i)\in\mathbb{R}^m$, $A = [A_{ij}]\in\mathbb{R}^{n\times n}$, and $B = [B_{ij}]\in\mathbb{R}^{n\times m}$.

The DMPC schemes discussed subsequently execute three steps per control interval: (i) each subsystem measures or estimates its current state $x_{i,0}$; (ii) the subsystems solve OCP (1) cooperatively to obtain $u_i^0$; and (iii) each subsystem applies $u_i^0$.

Closed-loop stability can be guaranteed by the design of X i f and P i j . Most approaches in the literature require the following assumptions to hold.

Assumption 1 (Requirements for OCP (1)).

  1. The overall system ( A , B ) from (2) is stabilizable.

  2. The matrices $Q = [Q_{ij}]\in\mathbb{R}^{n\times n}$, $P = [P_{ij}]\in\mathbb{R}^{n\times n}$, and $R = [R_{ij}]\in\mathbb{R}^{m\times m}$ are positive definite.

  3. The constraint sets X i and U i are convex and closed polytopes that contain the origin in their interior.

  4. The sets X i f are closed and convex polytopes.

  5. For all $x^k\in\mathcal{X}^f$, there exists $u^k\in\mathcal{U}$ such that $x^{k+1}\in\mathcal{X}^f$ and

    $V^f(x^{k+1}) - V^f(x^k) \le -\tfrac{1}{2}(x^k)^\top Q\,x^k - \tfrac{1}{2}(u^k)^\top R\,u^k,$

    where $V^f(x) = \tfrac{1}{2}x^\top P x$.

  6. OCP (1) is feasible at t = 0.

The numerical methods of Section 3 require that each subsystem communicates bidirectionally with all neighboring—i. e., interacting—subsystems. Hence, the neighbors of subsystem i are given by $\mathcal{N}_i \doteq \mathcal{N}_i^{\mathrm{in}} \cup \mathcal{N}_i^{\mathrm{out}}$, where

$\mathcal{N}_i^{\mathrm{in}} \doteq \{ j\in\mathcal{S}\setminus\{i\} \mid X_{ij}\ne 0,\ X\in\{A,B,Q,R,P\}\},\qquad \mathcal{N}_i^{\mathrm{out}} \doteq \{ j\in\mathcal{S}\setminus\{i\} \mid X_{ji}\ne 0,\ X\in\{A,B,Q,R,P\}\}.$

Formulation of (1) as a partially separable problem

To apply distributed optimization methods, we reformulate OCP (1) as partially separable Quadratic Program (QP). To this end, we introduce trajectory copies

(3) $v_{ji}^k = \mathrm{col}(x_j^k, u_j^k),\quad i\in\mathcal{S},\ j\in\mathcal{N}_i^{\mathrm{in}},\ k\in\mathbb{I}_{[0,N-1]}.$

This allows us to write (1b) as

$x_i^{k+1} = A_{ii}x_i^k + B_{ii}u_i^k + \sum_{j\in\mathcal{N}_i^{\mathrm{in}}} [A_{ij},\ B_{ij}]\,v_{ji}^k \qquad \forall i\in\mathcal{S}.$

The copies $v_i^k \doteq \mathrm{col}_{j\in\mathcal{N}_i^{\mathrm{in}}}(v_{ji}^k)$ are additional decision variables of subsystem i. We write OCP (1) as

(4a) $\min_{z_i,\,i\in\mathcal{S}}\ \sum_{i\in\mathcal{S}} \tfrac{1}{2}z_i^\top H_i z_i + g_i^\top z_i$
(4b) $\text{s. t.}\ C_i^E z_i = b_i^E \quad \forall i\in\mathcal{S},$
(4c) $C_i^I z_i \le b_i^I \quad \forall i\in\mathcal{S},$
(4d) $\sum_{i\in\mathcal{S}} E_i z_i = b,$

where $z_i \doteq \mathrm{col}(x_i, u_i, v_i)\in\mathbb{R}^{n_{z,i}}$, $v_i = \mathrm{col}_{j\in\mathcal{N}_i^{\mathrm{in}}}(v_{ji})$, and $v_{ji} \doteq \mathrm{col}_{k\in\mathbb{I}_{[0,N-1]}}(v_{ji}^k)$. The matrices $H_i$ are composed of the weight matrices $Q_{ij}$, $R_{ij}$, and $P_{ij}$. The equality constraints (4b) with $C_i^E\in\mathbb{R}^{n_{g,i}\times n_{z,i}}$ include the dynamics (1b). The inequalities (4c) with $C_i^I\in\mathbb{R}^{n_{h,i}\times n_{z,i}}$ describe the state and input constraints (1d). Depending on the type of terminal constraint, (1e) is part of (4b) or (4c). The parameters $g_i$ and b are zero for OCP (1). The constraints (4d) couple original and copied variables via (3). Due to this coupling of original and copied variables, the matrices $E_i\in\mathbb{R}^{n_c\times n_{z,i}}$, where $n_c$ denotes the number of coupling constraints, exhibit a sparse structure. This sparsity is crucial for solving the OCP in distributed fashion, and it is exploited by all optimization methods covered in this article. We remark that, in principle, OCP (4) also allows for coupled state and input constraints in addition to the coupled dynamics and objectives.

For the sake of brevity, we rewrite QP (4) as

(5a) $\min_{z_i\in\mathcal{Z}_i,\,i\in\mathcal{S}}\ \sum_{i\in\mathcal{S}} \phi_i(z_i)$
(5b) $\text{s. t.}\ \sum_{i\in\mathcal{S}} E_i z_i = b \quad\mid\ \lambda,$

where $\mathcal{Z}_i$ denotes the feasible set of subsystem i,

$\mathcal{Z}_i \doteq \{ z_i\in\mathbb{R}^{n_{z,i}} \mid C_i^E z_i = b_i^E,\ C_i^I z_i \le b_i^I \},$

and $\phi_i(z_i) = \tfrac{1}{2}z_i^\top H_i z_i + g_i^\top z_i$ is the local objective. The notation in (5b) highlights that λ is the Lagrange multiplier associated with (5b).

Example 1 (Problem formulation).

To illustrate the decoupling of dynamics via trajectory copies, consider the three subsystems

$x_1^{k+1} = x_1^k + u_1^k,\quad x_1^0 = x_{1,0},\qquad x_2^{k+1} = x_1^k + x_2^k,\quad x_2^0 = x_{2,0},\qquad x_3^{k+1} = x_2^k + x_3^k,\quad x_3^0 = x_{3,0}.$

Defining the copies $v_2 \doteq x_1$ and $v_3 \doteq x_2$ yields

$x_1^{k+1} = x_1^k + u_1^k,\quad x_1^0 = x_{1,0},\qquad x_2^{k+1} = v_2^k + x_2^k,\quad x_2^0 = x_{2,0},\qquad x_3^{k+1} = v_3^k + x_3^k,\quad x_3^0 = x_{3,0}.$

For N = 1, the decision variables in (4) are $z_1 = \mathrm{col}(x_1^0, x_1^1, u_1^0)$, $z_2 = \mathrm{col}(x_2^0, x_2^1, v_2^0)$, and $z_3 = \mathrm{col}(x_3^0, x_3^1, v_3^0)$. The coupling is given by

$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} z_1 + \begin{bmatrix} 0 & 0 & -1 \\ 1 & 0 & 0 \end{bmatrix} z_2 + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix} z_3 = 0.$
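The coupling constraint of Example 1 can be checked numerically. The following Python sketch uses illustrative state values and assumes the sign convention that each copied variable enters its constraint with coefficient −1 (so every row reads "original minus copy equals zero"):

```python
import numpy as np

# Coupling matrices for Example 1 with N = 1 (illustrative sign convention:
# each constraint reads "original variable minus its copy equals zero").
E1 = np.array([[1, 0, 0],
               [0, 0, 0]])   # selects x_1^0, which subsystem 2 copies
E2 = np.array([[0, 0, -1],
               [1, 0, 0]])   # -v_2^0 and x_2^0, which subsystem 3 copies
E3 = np.array([[0, 0, 0],
               [0, 0, -1]])  # -v_3^0

# Consistent trajectories: v_2^0 = x_1^0 and v_3^0 = x_2^0 (made-up values).
z1 = np.array([2.0, 0.0, 1.0])   # (x_1^0, x_1^1, u_1^0)
z2 = np.array([5.0, 0.0, 2.0])   # (x_2^0, x_2^1, v_2^0)
z3 = np.array([1.0, 0.0, 5.0])   # (x_3^0, x_3^1, v_3^0)

residual = E1 @ z1 + E2 @ z2 + E3 @ z3
print(residual)  # [0. 0.] -- the coupling constraint (4d) holds
```

Inconsistent copies would produce a non-zero residual, which is exactly the coupling-constraint violation the dual methods of Section 3 drive to zero.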

Remark 1 (Decentralized and distributed optimization).

Throughout the paper we distinguish distributed, decentralized, and essentially decentralized algorithms. We call a method decentralized if the subsystems communicate only with their neighbors. We refer to a method as distributed if, in addition to neighbor-to-neighbor communication, it requires a limited amount of centralized coordination, computation, and communication, e. g., to solve a centralized coordination optimization problem. We denote a method as essentially decentralized if there is no central computation but a limited amount of global communication—i. e., communication between all agents—which is small compared to the communication between neighbors.

Remark 2 (Stability under inexact optimization).

To limit communication and computation, optimization algorithms are often terminated early, leading to a suboptimal control input. Closed-loop stability for suboptimal control inputs can be guaranteed via appropriate terminal ingredients $P_i$ and $\mathcal{X}_i^f$, i. e., feasibility of the overall system implies stability [33]. For instance, one can choose a zero terminal constraint $\mathcal{X}_i^f = \{0\}$ or design local terminal sets $\mathcal{X}_i^f$ [8]. Feasibility of suboptimal solutions can be due to the optimization method (e. g., feasibility of all iterates), or it can be enforced via tightened constraint sets. Dual optimization methods such as DD and ADMM, however, achieve primal feasibility only asymptotically. For these methods, stability under inexact optimization can be guaranteed by appropriate stopping criteria [3], [18], [25].

In case one resorts to the straightforward choice $\mathcal{X}_i^f = \{0\}$ (e. g., in Section 5), it is necessary to require that the aggregated system (2), i. e., the pair (A, B), is controllable.

3 Algorithms for DMPC

3.1 Dual decomposition

The Lagrangian of (5) is defined as

$L(z,\lambda) = \Big(\sum_{i\in\mathcal{S}} \phi_i(z_i) + \lambda^\top E_i z_i\Big) - \lambda^\top b.$

The dual problem to (5) is then given by

(6) $\max_{\lambda}\ q(\lambda),$

where $q(\lambda) = \min_{z_i\in\mathcal{Z}_i} L(z,\lambda)$ is called the dual function. Dual decomposition applies gradient-based methods to (6). By Assumption 1, the objectives $\phi_i(z_i)$ are strongly convex with convexity parameter μ, and the local constraint sets $\mathcal{Z}_i$ are closed and convex. This ensures continuous differentiability of $q(\lambda)$ and hence

$\nabla q(\lambda) = \Big(\sum_{i\in\mathcal{S}} E_i z_i^\star\Big) - b,$

where $z_i^\star = \mathrm{argmin}_{z_i\in\mathcal{Z}_i}\ \phi_i(z_i) + \lambda^\top E_i z_i$, see [1, Prop. 6.1.1]. Applying the gradient method with step size c to (6) yields

$\lambda^{l+1} = \lambda^l + c\,\nabla q(\lambda^l).$

Convergence to the optimum is then guaranteed for any step size $c\in(0,\, 2\mu/\|E\|_2^2)$, where

$E = [E_1, \dots, E_{|\mathcal{S}|}],$

see [1, Prop. 1.2.3]. The iterations read

(7a) $z_i^l = \mathrm{argmin}_{z_i\in\mathcal{Z}_i}\ \phi_i(z_i) + (\lambda^l)^\top E_i z_i,$
(7b) $\lambda^{l+1} = \lambda^l + c\Big(\sum_{i\in\mathcal{S}} E_i z_i^l - b\Big).$
DD appears to be a distributed method due to update (7b). However, one can easily obtain a decentralized version as follows. The iterations (7) imply

$z_i^{l+1} = \mathrm{argmin}_{z_i\in\mathcal{Z}_i}\ \phi_i(z_i) + \Big(\lambda^l + c\Big(\sum_{j\in\mathcal{S}} E_j z_j^l - b\Big)\Big)^\top E_i z_i.$

If $j\notin\mathcal{N}_i\cup\{i\}$, then $E_j^\top E_i = 0$ and hence

$z_i^{l+1} = \mathrm{argmin}_{z_i\in\mathcal{Z}_i}\ \phi_i(z_i) + \Big(\lambda^l + c\Big(\sum_{j\in\mathcal{N}_i\cup\{i\}} E_j z_j^l - b\Big)\Big)^\top E_i z_i.$

Next, we introduce a local variable $\lambda_i\in\mathbb{R}^{n_c}$ for each subsystem such that

(8) $\lambda^\top E_i = \lambda_i^\top E_i,\quad i\in\mathcal{S}.$

If $j\in\mathcal{N}_i\cup\{i\}$, then $E_j^\top E_i$ is sparse and only the components

(9) $[z_j^l]_p,\qquad p\in\{ p\in\mathbb{I}_{[0,n_{z,j}]} \mid [E_j^\top E_i]_p \ne 0 \}$

of subsystem j are required to compute $\lambda_i^{l+1}$. With this we finally obtain DD in decentralized form as summarized in Algorithm 1. If (5) is obtained by copying variables as described in Section 2, then (8) implies

$[\lambda_i]_p = [\lambda]_p,\qquad p\in\{ p\in\mathbb{I}_{[0,n_c]} \mid [E_i]_p \ne 0 \},\quad i\in\mathcal{S}.$

Algorithm 1 
Decentralized DD [5].

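To make the iterations (7) concrete, the following Python sketch runs DD on a made-up two-agent problem: $\phi_1(z_1) = \tfrac{1}{2}z_1^2$, $\phi_2(z_2) = \tfrac{1}{2}(z_2-1)^2$, coupled by the consensus constraint $z_1 - z_2 = 0$. All problem data are illustrative assumptions, chosen so that the local minimizations (7a) have closed-form solutions:

```python
import numpy as np

# Toy instance of dual decomposition: two scalar agents with
# phi_1(z1) = 0.5*z1^2 and phi_2(z2) = 0.5*(z2 - 1)^2, coupled by
# z1 - z2 = 0, i.e., E1 = [1], E2 = [-1] (all data are illustrative).
mu = 1.0                      # convexity parameter of both objectives
E = np.array([[1.0, -1.0]])   # stacked coupling matrix [E1, E2]
c = 0.4                       # step size in (0, 2*mu/||E||_2^2) = (0, 1)

lam = 0.0
for _ in range(100):
    # local minimizations (7a), in closed form here
    z1 = -lam          # argmin 0.5*z1^2 + lam*z1
    z2 = 1.0 + lam     # argmin 0.5*(z2-1)^2 - lam*z2
    # multiplier update (7b): gradient ascent on the dual
    lam = lam + c * (z1 - z2)

print(z1, z2, lam)  # z1 and z2 approach 0.5, lam approaches -0.5
```

Note that the coupling residual $z_1 - z_2$ only requires neighbor-to-neighbor information, which is what Algorithm 1 exploits with the local multipliers $\lambda_i$.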

3.2 Dual decomposition with fast gradients

Instead of the gradient method, we can also apply a fast gradient method to solve the dual problem (6). This gives

$\lambda^{l+1} = \bar\lambda^l + \frac{1}{L}\nabla q(\bar\lambda^l),\qquad \alpha^{l+1} = \frac{\alpha^l}{2}\Big(\sqrt{(\alpha^l)^2 + 4} - \alpha^l\Big),\qquad \beta^l = \frac{\alpha^l(1-\alpha^l)}{(\alpha^l)^2 + \alpha^{l+1}},\qquad \bar\lambda^{l+1} = \lambda^{l+1} + \beta^l(\lambda^{l+1} - \lambda^l),$

where L is a Lipschitz constant of $\nabla q(\lambda)$ [9], [30] and $\beta^l$ is the extrapolation parameter. To obtain fast convergence, we choose the smallest Lipschitz constant L of $\nabla q(\lambda)$, i. e.,

$L = \sigma_{\max}(E H^{-1/2})^2,$

where $H = \mathrm{blkdiag}(H_1, \dots, H_{|\mathcal{S}|})$, see [30]. Again, we introduce local versions $\lambda_i\in\mathbb{R}^{n_c}$ and use the sparsity of $E_i$ to arrive at the decentralized version in Algorithm 2.

Algorithm 2 
DD with fast gradients [9].

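The extrapolation sequence can be sketched on the same illustrative two-agent problem used above for plain DD (an assumption made here for continuity; its dual gradient is $-2\lambda - 1$ with Lipschitz constant L = 2):

```python
import math

# Sketch of DD with fast gradients on an illustrative two-agent dual:
# grad q(lam) = -2*lam - 1, smallest Lipschitz constant L = 2.
def grad_q(lam):
    z1, z2 = -lam, 1.0 + lam   # closed-form local minimizers
    return z1 - z2             # coupling residual = dual gradient

L = 2.0
lam = lam_bar = 0.0
alpha = 0.5                    # alpha^0 chosen in (0, 1)
for _ in range(50):
    lam_next = lam_bar + grad_q(lam_bar) / L
    # Nesterov-type updates of the extrapolation parameters
    alpha_next = 0.5 * alpha * (math.sqrt(alpha**2 + 4.0) - alpha)
    beta = alpha * (1.0 - alpha) / (alpha**2 + alpha_next)
    lam_bar = lam_next + beta * (lam_next - lam)
    lam, alpha = lam_next, alpha_next

print(lam)  # approaches the dual optimum -0.5
```

On larger problems the extrapolated point $\bar\lambda^l$ is what produces the improved $O(1/k^2)$ rate compared to plain gradient ascent.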

3.3 Alternating direction method of multipliers

Distributed ADMM requires a different problem formulation, for which we introduce the auxiliary decision variable $\bar z_i\in\mathbb{R}^{n_{z,i}}$ for each subsystem. We assume that b = 0 and reformulate QP (5) as

(10a) $\min_{z_i\in\mathcal{Z}_i,\,\bar z_i,\,i\in\mathcal{S}}\ \sum_{i\in\mathcal{S}} \phi_i(z_i)$
(10b) $\text{s.t.}\ z_i - \bar z_i = 0,\quad i\in\mathcal{S} \quad\mid\ \nu_i,$
(10c) $\sum_{i\in\mathcal{S}} E_i \bar z_i = 0,$

where $\nu_i\in\mathbb{R}^{n_{z,i}}$ are the Lagrange multipliers associated with (10b). The augmented Lagrangian of (10) is given by

$L_\rho(z,\bar z,\nu) = \sum_{i\in\mathcal{S}} L_{\rho,i}(z_i,\bar z_i,\nu_i) = \sum_{i\in\mathcal{S}} \phi_i(z_i) + \nu_i^\top(z_i - \bar z_i) + \frac{\rho}{2}\|z_i - \bar z_i\|_2^2,$

where $\rho\in\mathbb{R}_+$ is called the penalty parameter. Let $\bar z = \mathrm{col}_{i\in\mathcal{S}}(\bar z_i)$ and let $\mathcal{E} = \{\bar z\in\mathbb{R}^{n_z} \mid \sum_{i\in\mathcal{S}} E_i\bar z_i = 0\}$ denote the feasible set of the coupling constraints (10c). ADMM updates the decision variables $z_i$ and $\bar z_i$ in alternating fashion with the iterations

(11a) $z_i^{l+1} = \mathrm{argmin}_{z_i\in\mathcal{Z}_i}\ L_{\rho,i}(z_i,\bar z_i^l,\nu_i^l),$
(11b) $\bar z^{l+1} = \mathrm{argmin}_{\bar z\in\mathcal{E}}\ \sum_{i\in\mathcal{S}} L_{\rho,i}(z_i^{l+1},\bar z_i,\nu_i^l),$
(11c) $\nu_i^{l+1} = \nu_i^l + \rho\,(z_i^{l+1} - \bar z_i^{l+1}).$

At each iteration, the variables $z_i$ satisfy the local constraints and the variables $\bar z_i$ satisfy the coupling constraints. Upon convergence, $z_i = \bar z_i$, such that all $z_i$ are primal feasible, i. e., they satisfy (10b).

From the iterations (11) it is not yet clear why ADMM is a decentralized method, as the update (11b) seemingly requires global coordination to satisfy the coupling constraints. However, for QP (10), the update can be replaced by an averaging step under suitable conditions on $E_i$ [4, p. 55]. These conditions are intrinsically satisfied if (4) is constructed using the procedure via copied trajectories as described in Example 1. To decentralize ADMM, one solves the corresponding KKT system analytically, $\bar z^{l+1} = M(z^{l+1} + \nu^l/\rho)$, where $M = I_{n_z} - E^\top(EE^\top)^{-1}E$, $z = \mathrm{col}_{i\in\mathcal{S}}(z_i)$, and $\nu = \mathrm{col}_{i\in\mathcal{S}}(\nu_i)$. Each row $[E]_k$ has precisely two non-zero elements: one element is +1 and corresponds to the original variable, and the other element is −1 and corresponds to the copy. Multiplication by M is then an averaging step. If the Lagrange multipliers satisfy

(12) $M\nu = 0,$

then the update of element $[\bar z]_p$ is given by the average of the original and copied variables that represent the same physical variable as $[\bar z]_p$. Since $\nu_i^{l+1}$ satisfies (12) for all $l\ge 1$ by (11) [4, p. 55], we only have to enforce (12) upon initialization if we wish to perform (11b) via the described averaging step. Furthermore, one may warm-start ADMM with the Lagrange multipliers obtained at the previous MPC step. Algorithm 3 summarizes the decentralized ADMM variant.

Algorithm 3 
Decentralized ADMM [31].

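The averaging form of (11b) can be sketched for a single shared scalar: agent 1 owns the original variable, agent 2 holds a copy, and the coordination QP reduces to averaging the two. The objectives $\phi_1(a) = \tfrac{1}{2}a^2$ and $\phi_2(b) = \tfrac{1}{2}(b-1)^2$ are illustrative assumptions:

```python
# Decentralized ADMM sketch for one shared scalar variable:
# the averaging step replaces the coordination problem (11b).
rho = 1.0                 # penalty parameter
a = b = 0.0               # local primal variables z_i
za = zb = 0.0             # auxiliary variables z_bar_i
nu_a = nu_b = 0.0         # multipliers, initialized so that M*nu = 0

for _ in range(200):
    # (11a): local augmented-Lagrangian minimizations (closed form here)
    a = (rho * za - nu_a) / (1.0 + rho)           # phi_1(a) = 0.5*a^2
    b = (1.0 + rho * zb - nu_b) / (1.0 + rho)     # phi_2(b) = 0.5*(b-1)^2
    # (11b) via averaging: original and copy are averaged together
    za = zb = 0.5 * ((a + nu_a / rho) + (b + nu_b / rho))
    # (11c): multiplier updates
    nu_a += rho * (a - za)
    nu_b += rho * (b - zb)

print(a, b)  # both approach the consensus optimum 0.5
```

Because the multipliers start at zero, condition (12) holds throughout and the averaging step is exact; warm-starting with multipliers from a previous MPC step would require (12) to be enforced on the initial guess.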

3.4 Distributed active set method

Algorithm 4 
Distributed ASM [37].


Next, we present a distributed active set method from [37]. In QP (4), the active set of agent i is

$A(z_i) \doteq \{ j\in\{1,\dots,n_{h,i}\} \mid [C_i^I]_j z_i = [b_i^I]_j \}.$

Starting from a feasible initialization $z_i^0$, the agents take steps $z_i^{l+1} = z_i^l + \alpha^l \Delta z_i^l$, where the step size $\alpha^l$ is the same for all agents. The directions $\Delta z_i^l$ are obtained by solving

(13a) $\min_{\Delta z_i,\,i\in\mathcal{S}}\ \sum_{i\in\mathcal{S}} \tfrac{1}{2}\Delta z_i^\top H_i \Delta z_i + (f_i^l)^\top \Delta z_i$
(13b) $\text{s.t.}\ C_i^l \Delta z_i = d_i,\quad i\in\mathcal{S} \quad\mid\ \gamma_i,$
(13c) $\sum_{i\in\mathcal{S}} E_i \Delta z_i = 0 \quad\mid\ \lambda,$

where $C_i^l \doteq \mathrm{col}\big(C_i^E,\ \mathrm{col}_{j\in A(z_i^l)}([C_i^I]_j)\big)$, $f_i^l \doteq H_i z_i^l + g_i$, $d_i = 0$, and $\gamma_i$ and λ are Lagrange multipliers.

Let $Z_i$ be a matrix whose columns form a basis of the null space of $C_i^l$, and let $Y_i$ be a matrix whose columns form a basis of the range space of $C_i^{l\top}$. Algorithm 4 summarizes the ASM.

Step 2 in Algorithm 4 first eliminates (13b), i. e., condenses (13), via the null-space basis $Z_i$ and then computes the Schur complement $S_i$ and the vector $s_i$. Step 3—inspired by the bi-level distributed algorithm from [15]—is the key idea of Algorithm 4: solve (13) with an essentially decentralized version of the Conjugate Gradient method (d-CG) [13]. The step directions $\Delta z_i$ are obtained via back substitution in Step 4. In Step 7, each agent checks for dual feasibility if the step directions satisfy $\|\Delta z_i^l\| < \varepsilon$ for a chosen threshold ε. The method stops if dual feasibility is achieved. Otherwise, the active inequality constraint corresponding to the most negative Lagrange multiplier among all agents is removed from the active set (Step 11) and new step directions $\Delta z_i$ are computed. If $\|\Delta z_i^l\| \ge \varepsilon$ for some agents, then in Step 15 every agent computes the largest step size $\alpha_i^l\in(0,1]$ which retains primal feasibility [29],

(14) $\alpha_i^l \doteq \min\Big\{ 1,\ \min_{j\notin A(z_i^l),\ [C_i^I]_j\Delta z_i^l > 0}\ \frac{[b_i^I]_j - [C_i^I]_j z_i^l}{[C_i^I]_j \Delta z_i^l} \Big\}.$

If $\alpha^l < 1$, the respective blocking constraint is added to the blocking agents' active set in Step 17.
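The ratio test for the step size can be sketched as follows (a minimal sketch with illustrative data, assuming inequality constraints of the form $Cz \le b$; only inactive constraints whose value increases along the step can become blocking):

```python
import numpy as np

# Ratio test for the largest primal-feasible step size of one agent,
# assuming constraints C z <= b (all numbers are illustrative).
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 0.0]])
b = np.array([0.8, 2.0, 0.5])
z = np.array([0.0, 0.0])        # current (feasible) iterate
dz = np.array([1.0, -1.0])      # step direction from the direction QP
active = set()                  # indices already in the active set

alpha = 1.0
blocking = None
for j in range(C.shape[0]):
    cj_dz = C[j] @ dz
    if j in active or cj_dz <= 0.0:
        continue                # constraint cannot become violated
    ratio = (b[j] - C[j] @ z) / cj_dz
    if ratio < alpha:
        alpha, blocking = ratio, j

print(alpha, blocking)  # 0.8 0 -- constraint 0 blocks the full step
```

Here constraint 0 becomes active at $\alpha = 0.8 < 1$ and would be added to the active set before the next direction is computed.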

To complete the presentation of ASM, we recall d-CG as the key component in Step 3 of Algorithm 4.

Algorithm 5 
d-CG [13].


Essentially decentralized conjugate gradients

Consider the system of linear equations $S\lambda = s$, where $S \doteq \sum_{i\in\mathcal{S}} S_i \in\mathbb{R}^{n_c\times n_c}$ is positive definite, $s \doteq \sum_{i\in\mathcal{S}} s_i$, and where the matrices $S_i$ from Step 2 in Algorithm 4 inherit the sparse structure of the coupling matrices $E_i$. The essentially decentralized conjugate gradient method solves this system of equations iteratively in at most $n_c$ steps [13]. To exploit the sparsity in $S_i$, define the set of constraints assigned to subsystem $i\in\mathcal{S}$ by $\mathcal{C}(i) = \{ j\in\{1,\dots,n_c\} \mid [E_i]_j \ne 0 \}$. Moreover, introduce matrices that map from global to local constraints, $I_{\mathcal{C}(i)} \doteq \mathrm{col}_{j\in\mathcal{C}(i)}(e_j^\top)\in\mathbb{R}^{n_{c,i}\times n_c}$, where $e_j\in\mathbb{R}^{n_c}$ is the jth unit vector. Algorithm 5 summarizes d-CG, with $\Lambda \doteq \sum_{i\in\mathcal{S}} I_{\mathcal{C}(i)}^\top I_{\mathcal{C}(i)}$, $\Lambda_i \doteq I_{\mathcal{C}(i)}\Lambda I_{\mathcal{C}(i)}^\top$, $\hat S_i \doteq I_{\mathcal{C}(i)} S_i I_{\mathcal{C}(i)}^\top$, and $I_{ij} \doteq I_{\mathcal{C}(i)} I_{\mathcal{C}(j)}^\top$. d-CG iteratively updates the residual $r_i$, the step direction $p_i$, and the step sizes σ and η. d-CG requires two types of communication: Step 9 requires neighbor-to-neighbor communication of the same amount as DD and ADMM. Steps 3, 7, and 11 require the summation of the scalars $\sigma_i$ and $\eta_i$ over the entire network, rendering d-CG an essentially decentralized algorithm. For a convergence proof and further details on d-CG, see [13].
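The structure d-CG exploits can be sketched with plain CG in which the matrix-vector products and scalar products are written as sums of per-agent contributions (illustrative data; this is the mathematical skeleton, not the communication-reduced implementation of [13]):

```python
import numpy as np

# Plain CG on S = sum_i S_i with per-agent sums made explicit
# (illustrative 2x2 data; S is positive definite).
S_parts = [np.array([[2.0, 0.0], [0.0, 0.0]]),
           np.array([[1.0, 1.0], [1.0, 3.0]])]
s_parts = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]

S = sum(S_parts)
s = sum(s_parts)

lam = np.zeros(2)
r = s - S @ lam
p = r.copy()
for _ in range(2):                        # CG converges in at most n_c = 2 steps
    Sp = sum(Si @ p for Si in S_parts)    # per-agent mat-vec contributions
    sigma = (r @ r) / (p @ Sp)            # network-wide sum of local scalars
    lam = lam + sigma * p
    r_new = r - sigma * Sp
    eta = (r_new @ r_new) / (r @ r)       # another network-wide scalar sum
    p = r_new + eta * p
    r = r_new

print(np.allclose(S @ lam, s))  # True -- the coupled system is solved
```

In d-CG, the scalar sums for σ and η are the only globally communicated quantities, while the vector quantities stay local to each agent's assigned constraints.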

3.5 Essentially decentralized interior point method

Next, we summarize the essentially decentralized interior point method (d-IP) from [16]. We reformulate the inequality constraints (4c) by a logarithmic barrier function, which yields

(15a) $\min_{z_i, w_i,\,i\in\mathcal{S}}\ \sum_{i\in\mathcal{S}} \phi_i(z_i) - \delta\,\mathbf{1}^\top \ln(w_i)$
(15b) $\text{s. t.}\ C_i^E z_i = b_i^E,\quad i\in\mathcal{S} \quad\mid\ \gamma_i,$
(15c) $C_i^I z_i + w_i = b_i^I,\quad i\in\mathcal{S} \quad\mid\ \mu_i,$
(15d) $w_i > 0,\quad i\in\mathcal{S},$
(15e) $\sum_{i\in\mathcal{S}} E_i z_i = b \quad\mid\ \lambda,$

with $\mathbf{1} = (1,\dots,1)^\top\in\mathbb{R}^{n_{h,i}}$, the logarithm applied componentwise, slack variables $w_i\in\mathbb{R}^{n_{h,i}}$, and Lagrange multipliers $\gamma_i$, $\mu_i$, and λ. Problem (15) is solved for a decreasing sequence of the barrier parameter δ > 0. In the limit δ → 0, the problems (4) and (15) are equivalent. Here, we consider a specific variant of interior point methods that applies one Newton step to (15) and then updates δ [7].

A Newton step for (15) reads

(16) $\begin{bmatrix} \nabla F_1^\delta & & & \tilde E_1^\top \\ & \ddots & & \vdots \\ & & \nabla F_{|\mathcal{S}|}^\delta & \tilde E_{|\mathcal{S}|}^\top \\ \tilde E_1 & \cdots & \tilde E_{|\mathcal{S}|} & 0 \end{bmatrix} \begin{bmatrix} \Delta p_1 \\ \vdots \\ \Delta p_{|\mathcal{S}|} \\ \Delta\lambda \end{bmatrix} = -\begin{bmatrix} F_1^\delta \\ \vdots \\ F_{|\mathcal{S}|}^\delta \\ \sum_{i\in\mathcal{S}} E_i z_i - b \end{bmatrix}$

with $\Delta p_i = (\Delta z_i, \Delta w_i, \Delta\gamma_i, \Delta\mu_i)$. We have

$F_i^\delta(p_i,\lambda) = \begin{bmatrix} H_i z_i + g_i + (C_i^E)^\top\gamma_i + (C_i^I)^\top\mu_i + E_i^\top\lambda \\ -\delta W_i^{-1}\mathbf{1} + \mu_i \\ C_i^E z_i - b_i^E \\ C_i^I z_i + w_i - b_i^I \end{bmatrix},$

$W_i = \mathrm{diag}(w_i)$,

$\nabla F_i^\delta = \begin{bmatrix} H_i & 0 & (C_i^E)^\top & (C_i^I)^\top \\ 0 & W_i^{-1}M_i & 0 & I \\ C_i^E & 0 & 0 & 0 \\ C_i^I & I & 0 & 0 \end{bmatrix},$

$M_i = \mathrm{diag}(\mu_i)$, and $\tilde E_i = [E_i\ 0\ 0\ 0]$. In case $\nabla F_i^\delta$ is invertible, we obtain from (16)

(17) $\Big(\sum_{i\in\mathcal{S}} S_i\Big)\Delta\lambda = \sum_{i\in\mathcal{S}} s_i,$

with

(18a) $S_i \doteq \tilde E_i (\nabla F_i^\delta)^{-1} \tilde E_i^\top,$
(18b) $s_i \doteq -\tilde E_i (\nabla F_i^\delta)^{-1} F_i^\delta + E_i z_i - \tfrac{1}{|\mathcal{S}|} b.$
To obtain an essentially decentralized algorithm, we solve (17) with d-CG from Subsection 3.4 [13]. Similar to the ASM, d-IP can be understood as a bi-level distributed algorithm [15]. Notice that early termination of d-CG allows us to reduce communication. Consider the residual $r_\lambda^l = S^l\Delta\lambda^l - s^l$ of (17). Convergence of d-IP can be ensured by specifying the d-CG stopping criterion $\|r_\lambda^l\| \le c_1(\delta^l)^\eta$ with parameters $c_1 > 0$ and η > 1. The barrier parameter δ is updated via a rule based on the complementarity condition,

(19) $\delta^{l+1} = \max_{i\in\mathcal{S}}\ \delta_i^{l+1} \qquad\text{with}\qquad \delta_i^{l+1} = \theta\Big(\frac{(w_i^l)^\top \mu_i^l}{n_{h,i}}\Big)^{1+\gamma}$

for parameters γ > 0 and θ close to 1. We ensure that the Newton step does not leave the domain of the barrier function ln(·) via the fraction-to-the-boundary rule for the primal and dual step sizes $\alpha^p$ and $\alpha^d$,

$\alpha^{p,l} = \min_{i\in\mathcal{S}}\ \alpha_i^{p,l},\qquad \alpha^{d,l} = \min_{i\in\mathcal{S}}\ \alpha_i^{d,l},$

with

(20a) $\alpha_i^{p,l} = \min\Big( \tau \min_{\Delta[w_i^l]_n < 0} \Big( -\frac{[w_i^l]_n}{\Delta[w_i^l]_n} \Big),\ 1 \Big),$
(20b) $\alpha_i^{d,l} = \min\Big( \tau \min_{\Delta[\mu_i^l]_n < 0} \Big( -\frac{[\mu_i^l]_n}{\Delta[\mu_i^l]_n} \Big),\ 1 \Big),$

and $\tau = 1 - (\delta^l)^\beta$ with β > γ [29].
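The fraction-to-the-boundary rule (20a) can be sketched for one agent's slacks (illustrative numbers): only components moving toward the boundary shorten the step, and the shortened step keeps all slacks strictly positive.

```python
import numpy as np

# Fraction-to-the-boundary rule for the primal slacks w (illustrative data):
# shorten the Newton step so that w + alpha*dw stays strictly positive.
def fraction_to_boundary(w, dw, tau):
    mask = dw < 0.0                  # only these components restrict the step
    if not mask.any():
        return 1.0
    return min(1.0, tau * np.min(-w[mask] / dw[mask]))

w = np.array([1.0, 0.5, 2.0])
dw = np.array([-2.0, 0.1, -1.0])
alpha = fraction_to_boundary(w, dw, tau=0.95)
print(alpha)                         # 0.95 * min(0.5, 2.0) = 0.475
assert np.all(w + alpha * dw > 0.0)  # iterate stays in the barrier domain
```

The same rule applied to the multipliers μ gives the dual step size (20b); the network-wide step sizes are then the minima over all agents, which requires one global scalar reduction.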

Algorithm 6 
d-IP [16].


Summing up, the updates for all primal and dual variables are given by

(21a) $z_i^{l+1} = z_i^l + \alpha^{p,l}\Delta z_i^l,\qquad \mu_i^{l+1} = \mu_i^l + \alpha^{d,l}\Delta\mu_i^l,$
(21b) $w_i^{l+1} = w_i^l + \alpha^{p,l}\Delta w_i^l,\qquad \gamma_i^{l+1} = \gamma_i^l + \alpha^{d,l}\Delta\gamma_i^l.$

The overall d-IP method is stated in Algorithm 6, where ε is a user-specified threshold, $F^\delta(p^l)$ is the right-hand side of (16), and $p_i = (z_i, w_i, \gamma_i, \mu_i)$.

3.6 Jacobi iterations

Next, we summarize an approach based on the parallel Jacobi iterations from [2, Chapter 3]. First DMPC schemes based on Jacobi iterations were reported in [38], [39] and later in [35], [36]. The overlapping variant considered here is from [11].

Table 1

Comparison of distributed optimization methods for MPC.

Method | Category | Assumptions on QP (5) | Proven convergence rate | Feasible iterates | Known guarantees for non-convex OCPs | Further reading
--- | --- | --- | --- | --- | --- | ---
DD | decentralized | strongly convex | sublinear $O(1/k)$ | no | no | [5], [25]
DD-FGM | decentralized | strongly convex | sublinear $O(1/k^2)$ | no | no | [9], [18], [19], [30]
ADMM | decentralized | convex | sublinear $O(1/k)$ | no | special cases only | [4], [17], [21]
ADMM | decentralized | strongly convex | linear | no | special cases only | [3], [6], [24], [31]
ASM | ess. decentralized | strictly convex | finite-time | yes | no | [37]
d-IP | ess. decentralized | strongly convex | superlinear | no | yes | [16]
Jacobi | decentralized | convex | — | yes | no | [11], [20], [35], [36], [38], [39]

Consider the centralized OCP (1) with $\mathcal{X}_i^f = \{0\}$ for all $i\in\mathcal{S}$. Subsequently, and in contrast to the other methods, we deviate from the notation in (4). Let $z = \mathrm{col}(x,u)$ denote the state and input trajectories over the horizon for the overall system (2). That is, in contrast to (4), the variable z now does not contain copies of state or input trajectories. We rewrite OCP (1) as

(22a) $\min_z\ \tfrac{1}{2}z^\top Hz + g^\top z$
(22b) $\text{s. t.}\ C^E z = b^E,$
(22c) $C^I z \le b^I.$

In Jacobi iterations, each agent minimizes (22) with respect to its individual decision variables and subsequently coordinates with its neighbors. The assignment of decision variables to agents is a design choice and influences the performance of the method. Here, we choose an overlapping decomposition as in [11], where each agent is assigned the subset $z_i$ of the decision variables z that contains its own state and input trajectories as well as those of its directly interacting neighbors: $z_i \doteq \mathrm{col}(x_j, u_j)_{j\in\mathcal{N}_i\cup\{i\}}$. We stack all other elements of z in $z_{-i}$. The problem for agent i then reads

(23a) $\min_{z_i}\ \phi(z_i, z_{-i})$
(23b) $\text{s. t.}\ C_i^E z_i + C_{-i}^E z_{-i} = b_i^E,$
(23c) $C_i^I z_i + C_{-i}^I z_{-i} \le b_i^I,$

where (23b), (23c) form the non-zero rows of (22b) and (22c) for the components of $z_i$, and $\phi(z_i, z_{-i}) = \tfrac{1}{2}z^\top Hz + g^\top z$. Starting from a feasible guess $z^0$, the subsystems complete the iterations given in Algorithm 7, where the weights $\omega_i$ are designed such that

(24) $\omega_i \ge 0,\qquad \sum_{i\in\mathcal{S}} \omega_i = 1.$

Step 3 of Algorithm 7 can be executed in parallel by each subsystem. Step 4 merges the trajectories $z_i$ of all subsystems $j\in\mathcal{N}_i\cup\{i\}$ with the previous trajectories $z_{-i}^l$ of all subsystems $j\notin\mathcal{N}_i\cup\{i\}$ to obtain the ith subsystem's guess for the trajectories of the entire network. The Jacobi iterations can be decentralized if each subsystem performs the summation in Step 5 for its own trajectories $\mathrm{col}(x_i, u_i)$ and then sends the result to its neighbors.
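The convex-combination step that the weights in (24) enable can be sketched as follows (a minimal sketch with illustrative weights and candidate trajectories; with convex constraint sets, the weighted average of feasible candidates remains feasible):

```python
import numpy as np

# Convex combination of candidate trajectories in a Jacobi-type iteration
# (all numbers are illustrative).
weights = np.array([0.5, 0.3, 0.2])     # omega_i >= 0, sum(omega_i) = 1
candidates = np.array([[1.0, 0.0],      # trajectory proposed by subsystem 1
                       [0.8, 0.2],      # ... by subsystem 2
                       [0.6, 0.4]])     # ... by subsystem 3

z_next = weights @ candidates           # weighted average of the proposals
print(z_next)                           # approximately [0.86 0.14]
```

This feasibility-preserving averaging is why the Jacobi iterates satisfy all constraints at every iteration, in contrast to the dual methods.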

Algorithm 7 
Jacobi iterations [11].


4 Comparison and discussion

Now we compare the algorithms in terms of convergence properties, assumptions, and implications for DMPC. The main insights are summarized in Table 1.

4.1 Dual decomposition

Assumptions, convergence, feasibility, and termination

Differentiability of the dual function can be ensured if the primal objective is strongly convex and the feasible set is compact [1, Prop. 6.1.1]. In this case, the root convergence rate in the dual variables is sublinear: $O(1/k)$ for DD and $O(1/k^2)$ for DD-FGM. The iterates $z_i^l$ satisfy the local constraints, but feasibility with respect to the coupling constraints is only achieved asymptotically. The methods may be terminated if the coupling constraint violation $\|\nabla q(\lambda^l)\|$ is sufficiently small.

Closed-loop stability

The satisfaction of the local constraints at any iteration can be exploited if the agents are coupled only via the cost in OCP (1) and not via the dynamics or the state or input constraints. In this case, if P i j and X i f are designed such that feasibility with respect to Z i implies stability, then the methods may be terminated after any iteration and yield a stabilizing input. However, if the coupling also occurs in the dynamics or state/input constraints, additional measures have to be taken as the coupling constraints are only satisfied asymptotically. For DD with proximal gradient methods, the following results exist. A stopping criterion to guarantee stability despite infeasibility is given in [18]. An a priori upper bound on the required iterations to guarantee stability when using polytopic terminal constraints is presented in [25].

Warm-starting and communication

DD and DD-FGM can be warm-started with the $\lambda_i^l$ obtained in the last MPC step to accelerate convergence. The methods are decentralized and require the communication of $2n_c$ floats in total per iteration.

4.2 ADMM

Assumptions, convergence, feasibility, and termination

ADMM requires less restrictive assumptions than DD. For general convex problems—i. e., neither Lipschitz continuity nor strong convexity is required—ADMM is guaranteed to converge at a sublinear rate ($O(1/k)$) for all values of ρ > 0 [21, Thm. 4.1]. If the problem is strongly convex, linear convergence can be shown [17, Cor. 2]. Convergence guarantees of ADMM for special classes of non-convex problems are given in [40], but guarantees for OCPs with nonlinear dynamics are as yet unavailable. As for DD, the iterates $z_i^l$ satisfy the local constraints at every iteration, but the coupling constraints are only satisfied asymptotically. The method may be terminated if both the primal residuals $r_i^l = z_i^l - \bar z_i^l$ and the dual residuals $s_i^l = \rho(\bar z_i^l - \bar z_i^{l-1})$ are sufficiently small, see [3], [4].

Closed-loop stability

Feasibility with respect to the local constraints can be exploited if the coupling occurs only in the cost function similar to DD. For general coupling setups, a stopping criterion to guarantee closed-loop stability under inexact optimization with ADMM is derived in [3].

Warm-starting and communication

ADMM can be warm-started with the solutions $\bar z_i^l$ and $\nu_i^l$ obtained in the last MPC step. Executing the update (11b) via the averaging step as in Algorithm 3 renders ADMM a decentralized method. This is well known in the context of distributed MPC [3], [9], [31]. This decentralized ADMM implementation requires the communication of $2n_c$ floats in total per iteration.

4.3 Distributed ASM

Assumptions, convergence, feasibility, and termination

To perform the condensing in Step 2 of Algorithm 4, ASM requires that $\bar H_i$ is invertible. To apply d-CG, the coefficient matrix S must be positive definite, which can be ensured by choosing $H_i$ positive definite [37]. Assuming no cycling occurs, ASM terminates in a finite number of iterations. Notably, ASM is feasible-side convergent, as all iterates $z_i^l$ satisfy the local and coupling constraints. This can be exploited to guarantee closed-loop stability under early truncation [33].

Feasible initialization

The initial iterates z i 0 must satisfy all constraints. In general, this is difficult to ensure by distributed computation. However, if the OCP contains only input box constraints, one can compute a feasible initialization for the ASM as follows: First, suppose that no input constraints are active. Then, solve (13) with d i = b i E and thus obtain a guess for z i 0 . If input constraints are violated, then add them to the active set and repeat until z i 0 is feasible.

Warm-starting and communication

The method can be warm-started with the solution $z_i^l$ from the last MPC step and the corresponding active set. Unlike DD and ADMM, the active set method requires global communication in Steps 10 and 16 and to run d-CG in Step 3. d-CG communicates two floats globally and $2n_c$ floats between neighbors per iteration.

4.4 Essentially decentralized IP

Assumptions, convergence, feasibility, and termination

Due to its Newton-type nature, d-IP notably exhibits superlinear local convergence for non-convex problems under standard regularity assumptions [16, Thm. 2]. By replacing the barrier parameter update rule with a more aggressive one, even quadratic convergence rates are possible [7]. The iterates $z_i^l$ satisfy the local and coupling constraints asymptotically. It is straightforward to see that the evaluation of the termination criterion in Algorithm 6 can be realized via additional neighbor-to-neighbor communication. In practice, one can also limit the number of iterations or terminate d-IP early if progress stalls. The decentralized evaluation of the termination criterion, however, warrants further investigation.

Warm-starting and communication

There is no efficient warm-starting procedure available for d-IP yet. As for ASM, global communication of scalars is required in Step 6 of Algorithm 6 and in d-CG as the inner algorithm. In addition, d-CG requires communicating 2 n_c floats per iteration between neighbors.

4.5 Jacobi iterations

Assumptions, convergence, feasibility, and termination

If OCP (1) is convex, the Jacobi iterations converge to a fixed point [11]. However, convergence to a stationary point or to a minimizer is not guaranteed. The iterates z_i^l satisfy all constraints. A possible termination condition is that the iterates are sufficiently close to a fixed point, i. e., ‖z^{l+1} − z^l‖ ≤ ε.

Warm-starting and communication

Jacobi iterations require a feasible initialization z_i^0 for each agent. If there is no plant-model mismatch and there are no disturbances, the method can be initialized with the solution from the previous MPC step. However, this is in general not the case in practice, which renders the application difficult. In total, each iteration requires communicating 2 n_c floats between neighbors.
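The flavor of such an iteration can be sketched on an unconstrained QP min 0.5 z'Hz + g'z: every agent minimizes over its own block of variables with the other blocks held at their previous iterates, then applies a relaxation weight ω. This is a simplified stand-in, not Algorithm 7 itself; the block partition and ω are illustrative.

```python
import numpy as np

def jacobi_qp(H, g, blocks, omega=0.5, tol=1e-8, max_iter=5000):
    """Block-Jacobi sketch for min 0.5 z'Hz + g'z (unconstrained):
    each block update solves the subproblem with neighbors fixed,
    then relaxes with weight omega; terminates once
    ||z^{l+1} - z^l|| <= tol (the fixed-point criterion above)."""
    n = H.shape[0]
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = z.copy()
        for idx in blocks:
            others = np.setdiff1d(np.arange(n), idx)
            # minimize over block idx with the other blocks fixed at z
            rhs = -(g[idx] + H[np.ix_(idx, others)] @ z[others])
            zi = np.linalg.solve(H[np.ix_(idx, idx)], rhs)
            z_new[idx] = (1 - omega) * z[idx] + omega * zi
        if np.linalg.norm(z_new - z) <= tol:
            return z_new
        z = z_new
    return z
```

For a convex QP with sufficiently weak coupling between blocks, the fixed point coincides with the minimizer; in general, as stated above, this is not guaranteed.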

4.6 Methods not discussed

Due to space limitations, this compendium has not touched upon several further methods that have seen application in distributed optimization and DMPC. In particular, we have not included block-coordinate descent schemes [28], the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) method [22], or its communication-efficient bi-level distributed variant [15]. While ALADIN is originally tailored to non-convex settings, it has seen first applications to convex DMPC problems [23]; an open-source implementation of ALADIN is available [14]. We remark that bi-level ALADIN [15] already included an early variant of d-CG, a later variant of which is used in the ASM and d-IP schemes [16], [37]. Besides ADMM, there exist further proximal methods suited for distributed optimization and DMPC, see [26], [34].

5 Numerical examples

This section serves two purposes: (i) We illustrate the prospect of early termination in the different algorithms considering a robot formation control problem, and (ii) we present an adversarial example which highlights the potential risks of early termination. For both examples, we design a stabilizing centralized MPC scheme with a zero terminal constraint, we formulate the OCP as a strongly convex QP, and we investigate the effects of early termination on the closed-loop system. For numerical results of DD and ADMM for randomly generated QPs we refer to [9], [18].

Simulation setup

In the following, we are interested in comparing the inner (i. e., d-CG) iterations for the bi-level algorithms ASM and d-IP. We cold-start all methods at MPC iteration k = 0 except for the Jacobi iterations, where we solve a modification of (22) with H = 0 and g = 0 at k = 0 with a centralized QP solver. We warm-start the methods for MPC iterations k ≥ 1 with the solution obtained in the last step, shifted appropriately. We furthermore tune the methods as follows and choose the parameters that yield the fastest convergence for the OCP at t = 0. For DD, we choose c ∈ [0.1 · 2μ/‖E‖_2^2, 0.99 · 2μ/‖E‖_2^2]. For DD-FGM, we set L = ‖E H^{−0.5}‖_2^2 as in [30]. For ADMM, we tune ρ ∈ [10^{−3}, 10^3]. For d-IP, we fix (c_1, γ, β, η, ε) = (1, 1.01, 2, 1.01, 10^{−7}) and tune θ ∈ [0.1, 0.9] and δ^0 ∈ [0.1, 1]. For JI, we adjust ω_i ∈ [0.1, 0.9]. For ASM, we set ε = 10^{−6} and r < 10^{−8} for d-CG. Moreover, we initialize ASM as described in Subsection 4.3 and remark that this initialization is not guaranteed to succeed, as the examples also include state and terminal constraints. Furthermore, there is no plant-model mismatch, such that any loss in performance is due solely to inexact optimization.
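The DD and DD-FGM tuning rules above can be computed directly from the QP data. The following sketch uses a made-up block-diagonal Hessian H and consensus matrix E purely for illustration:

```python
import numpy as np

# Toy QP data (illustrative only): block-diagonal Hessian and a
# consensus matrix coupling two pairs of variables.
H = np.diag([2.0, 3.0, 2.0, 5.0])
E = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])

mu = np.min(np.linalg.eigvalsh(H))      # strong convexity modulus of H
E_norm_sq = np.linalg.norm(E, 2) ** 2   # spectral norm squared, ||E||_2^2

# DD step size: the paper tunes c within [0.1, 0.99] times the
# upper bound 2*mu/||E||_2^2
c_lo = 0.1 * 2 * mu / E_norm_sq
c_hi = 0.99 * 2 * mu / E_norm_sq

# DD-FGM: Lipschitz constant of the dual gradient, L = ||E H^{-0.5}||_2^2
H_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(H)))  # valid since H is diagonal here
L = np.linalg.norm(E @ H_inv_sqrt, 2) ** 2
```

For a non-diagonal (block-diagonal) H, the inverse square root would instead be formed blockwise, e.g. via an eigendecomposition of each block.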

5.1 Formation control of mobile robots

We consider four mobile robots that move in the (x, y)-plane with dynamics

x_i^{k+1} = x_i^k + T u_i^k,  x_i^0 = x_{i,0},  i ∈ {1, …, 4},

where x_i ∈ R^2 is the position, u_i ∈ R^2 the velocity, and T = 0.2 the sampling interval of the controller. The velocity is limited to −col(2, 2) ≤ u_i ≤ col(2, 2). The task is to steer the robots from an initial formation x_0 ∈ R^8 into a desired formation x_d ∈ R^8. The centralized OCP has the objective (1/2) Σ_{k=0}^{N} (x^k − x_d)^⊤ Q (x^k − x_d) + (u^k − u_d)^⊤ R (u^k − u_d), horizon length N = 20, and a zero terminal constraint X_f = {x_d}.[2] We choose R = I_8 and follow the approach from [12] to design Q = C_p^⊤ D C_p with

C_p = (1/4) [ 1 1 1 1 ; 3 −1 −1 −1 ; −1 3 −1 −1 ; −1 −1 3 −1 ; −1 −1 −1 3 ] ⊗ I_2,  D = diag(0.1, 10, 10, 10, 10, 10) ⊗ I_2.

Figure 1

Convergence to the OCP minimizer at t = 0. Top: robot example. Bottom: adversarial example.

Figure 2

Robot example: closed-loop trajectories in the (x, y)-plane and corresponding maximum number of iterations per MPC step in parentheses. Left: performance similar to centralized MPC. Right: performance worse than centralized MPC.

Since the coupling occurs only in the objective and not in the constraints, and as we employ a terminal constraint, DD and ADMM can be terminated at any iteration and still yield a stabilizing controller. Figure 1 (top) shows the convergence of the respective methods to the minimizer at k = 0, where we tuned ADMM by choosing ρ = 1. Compared to ASM and d-IP, ADMM converges slowly in z but is quick to achieve a modest accuracy in u^0. Figure 2 illustrates how suboptimal inputs can achieve good control performance on this example. The results for DD are omitted in Figure 2 because of the slow convergence observed in Figure 1.

5.2 An adversarial example

Consider three first-order systems

ẋ_1(t) = u_1(t),              x_1(0) = 1,
ẋ_2(t) = x_1(t) + 4 x_2(t),   x_2(0) = 2,
ẋ_3(t) = x_2(t) + 4 x_3(t),   x_3(0) = 3,

with constraints −1000 ≤ u_1 ≤ 1000 and −200 ≤ x_i ≤ 200. This example is challenging, as agents two and three are not stabilizable individually even though the aggregated overall system is controllable. We choose the sampling interval T = 0.040 and discretize the dynamics with exact zero-order hold. We design the centralized OCP with N = 50, Q_i = 10, R = 1, and a zero terminal constraint.
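The exact zero-order-hold discretization can be obtained via the augmented matrix exponential; a sketch with NumPy/SciPy, where A and B follow the continuous-time dynamics above:

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time dynamics of the three coupled first-order systems
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 4.0, 0.0],
              [0.0, 1.0, 4.0]])
B = np.array([[1.0], [0.0], [0.0]])
T = 0.040  # sampling interval

# Exact ZOH: expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
M = np.zeros((4, 4))
M[:3, :3] = A
M[:3, 3:] = B
Md = expm(M * T)
Ad, Bd = Md[:3, :3], Md[:3, 3:]
```

The unstable continuous-time eigenvalues 4 of agents two and three reappear as discrete-time eigenvalues e^{4T} ≈ 1.17 of Ad, which is why even small coupling-constraint violations are amplified in closed loop.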

Figure 1 (bottom) shows the convergence to the unique minimizer at MPC iteration k = 0. Despite large residuals in the overall decision variable z, ADMM and DD-FGM converge to the optimal input u^0 after 250 and 530 iterations, respectively. In contrast, ASM attains the minimum in about 100 iterations, while d-IP gives very promising results in z for 250 iterations.

Figure 3 shows the distance to the origin of the centralized state vector after running the controller for k_f = 125 MPC steps (i. e., t_f = 5) versus the maximum number of iterations taken per MPC step. The plot thus illustrates how many iterations per MPC step a method requires to stabilize the system from the considered initial conditions. We remark that standard DD requires approximately 10^4 iterations per MPC step to achieve closed-loop convergence. It is hence omitted from the plot.

Figure 3

Adversarial example: closed-loop convergence versus maximum number of iterations per MPC step.

Figure 4 shows the closed-loop trajectories for a selected set of simulations using ADMM, d-IP, and ASM while limiting the number of iterations per MPC step. As seen in Figure 3, 70 ADMM iterations per MPC step suffice to achieve convergence. However, the trajectory of x_3 in Figure 4 indicates that the performance is far worse than for ASM and d-IP, which achieve almost centralized MPC performance.

Figure 4

Adversarial example: closed-loop trajectories for ASM, d-IP and ADMM after early termination.

This example permits several interesting observations: Due to the lack of individual controllability of the subsystems, a slight violation of the consensus constraint (5b) leads to instability of the closed loop. As shown in Figure 4, ADMM applied with fewer than 70 iterations per MPC step does not stabilize the system. This loss of stability occurs despite nominal convergence guarantees of the optimization methods and despite feasibility of the predicted state and input trajectories with respect to the local constraints (4b)–(4c). The instability is due to the infeasibility of the ADMM iterates with respect to the coupling constraints (4d).

Indeed, this adversarial example is constructed to highlight the lack of robustness induced by the combination of early truncation in DMPC and a lack of local stabilizability/controllability properties. In essence, this emphasizes that local feasibility does not necessarily imply stability. Instead, local feasibility and coupling feasibility together—under appropriate assumptions on OCP (4a)—imply stability [33].

Finally, while the Jacobi iterations show very promising performance in view of Figure 3, the substantial pitfalls of this method are its need for a feasible initialization (here with the centralized optimal solution) and the lack of suitable optimality-upon-convergence properties. Indeed, for the considered example the Jacobi method converges to z in a single iteration, as each agent solves the entire OCP in Step 3 of Algorithm 7.

This example illustrates that the promising results of [6]—which reports that as few as five ADMM iterations per MPC step can suffice to stabilize a set of three Van der Pol oscillators—might be affected if subsystems are not controllable. However, it also stands to reason that ADMM exhibits a certain robustness. We remark that one unique advantage of ASM and d-IP over ADMM is that the agents do not require a local optimization solver. Indeed, embedded implementations might benefit from this feature.

6 Conclusions

This paper has reviewed six distributed or decentralized algorithms for distributed model predictive control of linear systems subject to convex objectives and polytopic constraints. Specifically, we reviewed two variants of dual decomposition, ADMM, a distributed active set method, an essentially decentralized interior point method, and Jacobi iterations. We compared the methods in terms of their convergence properties, communication effort, and structure. Simulation results for a formation control problem illustrate the convergence properties of the algorithms. Moreover, we presented an adversarial example to highlight the difficulties that can arise from the path infeasibility (i. e., only asymptotic convergence towards feasibility) of dual methods and to showcase the advantages of methods with faster asymptotic convergence. Benchmarking the covered methods on further problems is subject to future work.


1. Bertsekas, D.P. 1999. Nonlinear Programming. Athena Scientific, Belmont.

2. Bertsekas, D.P. and J.N. Tsitsiklis. 1989. Parallel and Distributed Computation: Numerical Methods, vol. 23. Prentice Hall, Englewood Cliffs, NJ.

3. Bestler, A. and K. Graichen. 2019. Distributed model predictive control for continuous-time nonlinear systems based on suboptimal ADMM. Optimal Control Appl. Methods 40(1): 1–23. doi:10.1002/oca.2459.

4. Boyd, S., N. Parikh, E. Chu, B. Peleato and J. Eckstein. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1): 1–122. doi:10.1561/9781601984616.

5. Braun, P. and L. Grüne. 2018. Verteilte Optimierung: Anwendungen in der Modellprädiktiven Regelung. Automatisierungstechnik 66(11): 939–949. doi:10.1515/auto-2018-0009.

6. Burk, D., A. Völz and K. Graichen. 2021. A modular framework for distributed model predictive control of nonlinear continuous-time systems (GRAMPC-D). Optim. Eng. 1–25. doi:10.1007/s11081-021-09605-3.

7. Byrd, R.H., G. Liu and J. Nocedal. 1997. On the local behavior of an interior point method for nonlinear programming. Numer. Anal. 37–56.

8. Conte, C., C.N. Jones, M. Morari and M.N. Zeilinger. 2016. Distributed synthesis and stability of cooperative distributed model predictive control for linear systems. Automatica 69: 117–125. doi:10.1016/j.automatica.2016.02.009.

9. Conte, C., T. Summers, M.N. Zeilinger, M. Morari and C.N. Jones. 2012. Computational aspects of distributed optimization in model predictive control. In: Proc. 51st IEEE Conference on Decision and Control, pp. 6819–6824.

10. Costantini, G., R. Rostami and D. Görges. 2018. Distributed linear quadratic regulator for the synthesis of a separable terminal cost for distributed model predictive control. In: Proc. 57th IEEE Conference on Decision and Control, pp. 5170–5175.

11. Doan, M.D., M. Diehl, T. Keviczky and B. De Schutter. 2017. A Jacobi decomposition algorithm for distributed convex optimization in distributed model predictive control. IFAC-PapersOnLine 50(1): 4905–4911.

12. Ebel, H. and P. Eberhard. 2021. A comparative look at two formation control approaches based on optimization and algebraic graph theory. Robot. Auton. Syst. 136: 103686. doi:10.1016/j.robot.2020.103686.

13. Engelmann, A. and T. Faulwasser. 2021. Decentralized conjugate gradients with finite-step convergence. arXiv:2102.12311.

14. Engelmann, A., Y. Jiang, H. Benner, R. Ou, B. Houska and T. Faulwasser. 2021. ALADIN-α—an open-source MATLAB toolbox for distributed non-convex optimization. Optimal Control Appl. Methods 1–19. doi:10.1002/oca.2811.

15. Engelmann, A., Y. Jiang, B. Houska and T. Faulwasser. 2020. Decomposition of nonconvex optimization via bi-level distributed ALADIN. IEEE Trans. Control Netw. Syst. 7(4): 1848–1858. doi:10.1109/TCNS.2020.3005079.

16. Engelmann, A., G. Stomberg and T. Faulwasser. 2021. Toward decentralized interior point methods for control. In: Proc. 60th IEEE Conference on Decision and Control. doi:10.1109/CDC45484.2021.9683694.

17. Giselsson, P. and S. Boyd. 2016. Linear convergence and metric selection for Douglas–Rachford splitting and ADMM. IEEE Trans. Automat. Control 62(2): 532–544. doi:10.1109/TAC.2016.2564160.

18. Giselsson, P. and A. Rantzer. 2014. On feasibility, stability and performance in distributed model predictive control. IEEE Trans. Automat. Control 59(4): 1031–1036. doi:10.1109/TAC.2013.2285779.

19. Giselsson, P., M.D. Doan, T. Keviczky, B. De Schutter and A. Rantzer. 2013. Accelerated gradient methods and dual decomposition in distributed model predictive control. Automatica 49(3): 829–833. doi:10.1016/j.automatica.2013.01.009.

20. Groß, D. and O. Stursberg. 2013. On the convergence rate of a Jacobi algorithm for cooperative distributed MPC. In: Proc. 52nd IEEE Conference on Decision and Control, pp. 1508–1513. doi:10.1109/CDC.2013.6760096.

21. He, B. and X. Yuan. 2012. On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50(2): 700–709. doi:10.1137/110836936.

22. Houska, B., J. Frasch and M. Diehl. 2016. An augmented Lagrangian based algorithm for distributed nonconvex optimization. SIAM J. Optim. 26(2): 1101–1127. doi:10.1137/140975991.

23. Jiang, Y., P. Sauerteig, B. Houska and K. Worthmann. 2020. Distributed optimization using ALADIN for MPC in smart grids. IEEE Trans. Control Syst. Technol. doi:10.1109/TCST.2020.3033010.

24. Kögel, M. and R. Findeisen. 2012. Cooperative distributed MPC using the alternating direction multiplier method. IFAC Proc. Vol. 45(15): 445–450. doi:10.3182/20120710-4-SG-2026.00159.

25. Köhler, J., M.A. Müller and F. Allgöwer. 2019. Distributed model predictive control—recursive feasibility under inexact dual optimization. Automatica 102: 1–9. doi:10.1016/j.automatica.2018.12.037.

26. Latafat, P., N.M. Freris and P. Patrinos. 2019. A new randomized block-coordinate primal-dual proximal algorithm for distributed optimization. IEEE Trans. Automat. Control 64(10): 4050–4065. doi:10.1109/TAC.2019.2906924.

27. Müller, M.A. and F. Allgöwer. 2017. Economic and distributed model predictive control: recent developments in optimization-based control. SICE J. Control Meas. Syst. Integr. 10(2): 39–52. doi:10.9746/jcmsi.10.39.

28. Necoara, I. and D. Clipici. 2013. Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC. J. Process Control 23(3): 243–253. doi:10.1016/j.jprocont.2012.12.012.

29. Nocedal, J. and S. Wright. 2006. Numerical Optimization. Springer Science & Business Media, New York.

30. Richter, S., M. Morari and C.N. Jones. 2011. Towards computational complexity certification for constrained MPC based on Lagrange relaxation and the fast gradient method. In: Proc. 50th IEEE Conference on Decision and Control and European Control Conference, pp. 5223–5229. doi:10.1109/CDC.2011.6160931.

31. Rostami, R., G. Costantini and D. Görges. 2017. ADMM-based distributed model predictive control: primal and dual approaches. In: Proc. 56th IEEE Conference on Decision and Control, pp. 6598–6603.

32. Scattolini, R. 2009. Architectures for distributed and hierarchical model predictive control – a review. J. Process Control 19(5): 723–731. doi:10.1016/j.jprocont.2009.02.003.

33. Scokaert, P., D. Mayne and J. Rawlings. 1999. Suboptimal model predictive control (feasibility implies stability). IEEE Trans. Automat. Control 44(3): 648–654. doi:10.1109/9.751369.

34. Stathopoulos, G., H. Shukla, A. Szucs, Y. Pu and C.N. Jones. 2016. Operator splitting methods in control. Found. Trends Syst. Control 3(3): 249–362. doi:10.1561/2600000008.

35. Stewart, B.T., S.J. Wright and J.B. Rawlings. 2011. Cooperative distributed model predictive control for nonlinear systems. J. Process Control 21(5): 698–704. doi:10.1016/j.jprocont.2010.11.004.

36. Stewart, B.T., A.N. Venkat, J.B. Rawlings, S.J. Wright and G. Pannocchia. 2010. Cooperative distributed model predictive control. Systems Control Lett. 59(8): 460–469. doi:10.1016/j.sysconle.2010.06.005.

37. Stomberg, G., A. Engelmann and T. Faulwasser. 2021. A distributed active set method for model predictive control. IFAC-PapersOnLine 54(3): 263–268. doi:10.1016/j.ifacol.2021.08.252.

38. Venkat, A.N., I.A. Hiskens, J.B. Rawlings and S.J. Wright. 2008. Distributed MPC strategies with application to power system automatic generation control. IEEE Trans. Control Syst. Technol. 16(6): 1192–1206. doi:10.1109/TCST.2008.919414.

39. Venkat, A.N., J.B. Rawlings and S.J. Wright. 2005. Stability and optimality of distributed model predictive control. In: Proc. 44th IEEE Conference on Decision and Control, pp. 6680–6685. doi:10.1109/CDC.2005.1583235.

40. Wang, Y., W. Yin and J. Zeng. 2019. Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comput. 78(1): 29–63. doi:10.1007/s10915-018-0757-z.

41. Xiao, G. and F. Liu. 2020. Observer-based cooperative distributed fault-tolerant model predictive control with imperfect network communication and asynchronous measurements. Internat. J. Robust Nonlinear Control 30(12): 4531–4549. doi:10.1002/rnc.4994.

Received: 2021-08-09
Accepted: 2022-01-04
Published Online: 2022-03-25
Published in Print: 2022-04-26

© 2022 Stomberg et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.