Open Access (CC BY 4.0 license). Published by De Gruyter, July 3, 2020

Optimal control of timed event graphs with resource sharing and output-reference update

Optimalsteuerung zeitbehafteter Synchronisationsgraphen mit Ressourcenkonkurrenz und Aktualisierung von Referenzsignalen
Germano Schafaschek, Laurent Hardouin and Jörg Raisch

Abstract

Timed event graphs (TEGs) are a subclass of timed Petri nets that model synchronization and delay phenomena, but not conflict or choice. We consider a scenario where a number of TEGs share one or several resources and are subject to changes in their output-reference signals. Because of resource sharing, the resulting overall discrete event system is not a TEG. We propose a formal method to determine the optimal control input for such systems, where optimality is in the sense of the widely adopted just-in-time criterion. Our approach is based on a prespecified priority policy for the TEG components of the overall system. It builds on existing control theory for TEGs, which exploits the fact that, in a suitable mathematical framework (idempotent semirings such as the max-plus or the min-plus algebra), the temporal evolution of TEGs can be described by a set of linear time-invariant equations.

Zusammenfassung

Zeitbehaftete Synchronisationsgraphen (ZSGen) bilden eine spezielle Klasse zeitbehafteter Petri-Netze. Sie können Synchronisations- und Verzögerungsphänomene modellieren, nicht aber Konflikte. Wir untersuchen ein Szenario, in dem sich mehrere ZSGen eine oder mehrere Ressourcen teilen und die Referenzsignale der ZSGen unvorhersehbaren Änderungen unterworfen sind. Da die beteiligten ZSGen um Ressourcen konkurrieren, ist das Gesamtsystem kein ZSG. Wir beschreiben eine formale Vorgehensweise zur Bestimmung des im just-in-time Sinne optimalen Stellsignals für dieses Gesamtsystem. Unser Ansatz basiert auf einer vorab festgelegten Priorisierung der einzelnen ZSGen. Er baut auf der existierenden Regelungstheorie für ZSGen auf und nutzt die Tatsache, dass sich die zeitliche Entwicklung von ZSGen in einem geeigneten mathematischen Rahmen (idempotente Halbringe wie beispielsweise die max-plus- oder die min-plus-Algebra) durch lineare zeitinvariante Gleichungen beschreiben lässt.

1 Introduction

In this paper, we consider a scenario where several discrete event subsystems, each modeled by a timed event graph (TEG), share one or more resources and where the reference signals for the subsystems may change unexpectedly. TEGs are a subclass of timed Petri nets. They are characterized by the fact that each place has precisely one upstream and one downstream transition and all arcs have weight one. TEGs can model synchronization and delay phenomena but not conflict, choice, or the sharing of resources. Therefore, the overall system class investigated in this contribution is more general than the subclass of TEGs. We argue that it models a wide range of application scenarios, e. g., in manufacturing or transportation. We aim at determining the optimal control input, where optimality is understood in the sense of the widely adopted just-in-time criterion. In particular, the aim is to fire all input transitions as late as possible while guaranteeing that the firing of output transitions is not later than specified by the respective reference signal. For example, in a manufacturing context, the firing of an input transition could correspond to the provisioning of raw material, while the firing of an output transition could model the completion of a workpiece. In general, a just-in-time policy aims at satisfying customer demands while minimizing internal stocks. We solve this optimal control problem for a fixed prioritization of subsystems, but allow updates of the reference signals.

It is a well-known fact that in a suitable mathematical framework, namely an idempotent semiring (or dioid) setting such as the max-plus or the min-plus algebra, the evolution of TEGs can be described by linear equations (see [3] for a thorough coverage). Based on such linear dioid models, an elaborate control theory has become available. Given an a-priori known reference signal, it mostly focuses on optimality in the sense of the just-in-time criterion and considers feedforward and (output or state) feedback control. For a tutorial introduction to this control framework, the reader may refer to [11].

In some applications, it may be necessary to update the reference for the system’s output during run-time, for instance when customer demand is increased and a new production objective must be considered. In [12], a strategy has been presented to optimally update the input in face of such changes in the output-reference. In case the new reference encodes unachievable requirements, the authors show how to relax it such that optimal control for the relaxed reference leads to firing times of the output transitions that are as close as possible to (but possibly later than) the originally desired ones.

Systems of practical interest often involve limited resources that are shared among different subsystems. As examples, one can think of an automated manufacturing cell where the same tool/robot may be required in several steps of the production process, or of computational tasks competing for the use of a fixed number of processors. TEGs do not allow for concurrency or choice and hence are inapt to model such resource-sharing phenomena. Overcoming this limitation has motivated several efforts in the literature, with a predominant focus on modeling and analysis. In [7], [6], a modeling strategy for continuous timed Petri nets is proposed where conflict places are handled by the use of priority rules. In [8], constraints due to resource sharing are translated into additional inequalities in the system model. [1] models conflicting TEGs by max-plus time-varying equations; the models are restricted to safe conflict places. [5] relaxes the safety hypothesis on the conflict places and studies cycle time evaluation on conflicting TEGs with multiple shared resources. Works focusing on control of TEGs with resource sharing are less abundant. In [15], the authors show that concurrency can be incorporated into switching max-plus linear systems models and apply model predictive control techniques to obtain the optimal switching sequence. In [13], the modeling and control of a number of TEGs that share resources is addressed. Obviously, because of resource sharing, the overall system is no longer a TEG. Under a prespecified priority policy, the authors show how to compute the optimal (just-in-time) input for each subsystem with respect to its individual output-reference.

In this paper, we propose a formal method to obtain the optimal control inputs in face of changes in the output-references for TEGs that share resources under a given priority policy, thus merging the results from [12] with those of [13]. This paper is an extended version of a recent conference paper [14]. It extends the contributions of [14] in the following ways: the results are presented more didactically, with more detailed explanations of some crucial steps. Most significantly, we generalize the method to the case of an arbitrary number of shared resources (Section 5.3 is entirely new), whereas in [14] only the case of a single shared resource is explicitly covered; as a prerequisite, we also formalize the extension of the results from [13], originally presented for at most two shared resources, to the case of multiple shared resources (this is the origin of Section 4.3). The examples in this paper, although still simple, are more general and more illustrative than those from [14]. We also enhance the presentation of preliminary concepts, providing a brief survey of the theoretical background and making the paper largely self-contained.

Prospective applications for the proposed approach include emergency call centers (as studied, e. g., in [2], which, in turn, is based on [7], [6]) where the arrival of high-priority calls may render it necessary to reschedule the answers to lower-priority ones, or manufacturing scenarios where changes in the demand of high-priority products will require a re-adjustment of resource allocations by processing steps related to lower-priority products.

We consider a set of TEGs operating under optimal schedules with respect to their individual output-references and to the priority policy; supposing the output-reference of one or more of the subsystems is updated during run-time, we show how to optimally update all their inputs so that their outputs are as close as possible to the corresponding new references and the priority policy is still observed. In case the performance limitations of the subsystems, combined with the limited availability of the resources, make it impossible to respect some of the new references, we also provide the optimal way to relax such references so that the ultimately obtained inputs track them as closely as possible.

The examples presented throughout this paper serve solely to illustrate and clarify the results. Due to space limitations, and because existing computational tools have not kept pace with the state-of-the-art theoretical results proposed here, we do not present a more comprehensive example. The proposed method can, however, be applied to larger, more general systems of practical relevance.

The paper is organized as follows. Section 2 summarizes well-known facts on idempotent semirings. In Section 3, we adapt existing results on the control of TEGs with output-reference update to the idempotent semiring used in this paper. Section 4 provides an overview of previous results on modeling and control of TEGs with shared resources. The major purpose of these three sections is making the paper as self-contained as possible. In Section 5, the main contributions of the paper are presented; namely, we formulate and solve the problem of determining the optimal control inputs for TEGs with shared resources in face of changes in the output-references. Section 6 presents the conclusions and final remarks.

2 Preliminaries

In this section, we present a summary of some basic definitions and results on idempotent semirings and timed event graphs; for an exhaustive discussion, the reader may refer to [3]. We also touch on some topics from residuation theory and control of TEGs (see [4] and [11], respectively).

2.1 Idempotent semirings

An idempotent semiring (or dioid) $\mathcal{D}$ is a set endowed with two binary operations, denoted ⊕ (sum) and ⊗ (product), such that: ⊕ is associative, commutative, idempotent (i. e., $(\forall a \in \mathcal{D})\; a \oplus a = a$), and has a neutral (zero) element, denoted ε; ⊗ is associative, distributes over ⊕, and has a neutral (unit) element, denoted e; the element ε is absorbing for ⊗ (i. e., $(\forall a \in \mathcal{D})\; a \otimes \varepsilon = \varepsilon$). As in conventional algebra, the product symbol ⊗ is often omitted. An order relation can be defined over $\mathcal{D}$ by

(1) $(\forall a, b \in \mathcal{D})\quad a \preceq b \;\Leftrightarrow\; a \oplus b = b.$

Note that ε is the bottom element of $\mathcal{D}$, as $(\forall a \in \mathcal{D})\; \varepsilon \preceq a$.

An idempotent semiring $\mathcal{D}$ is complete if it is closed for infinite sums and if the product distributes over infinite sums. For a complete idempotent semiring, the top element is defined as $\top = \bigoplus_{x \in \mathcal{D}} x$, and the greatest lower bound (or infimum) operation, denoted ∧, by

$(\forall a, b \in \mathcal{D})\quad a \wedge b = \bigoplus_{x \preceq a,\, x \preceq b} x.$

∧ is associative, commutative, and idempotent, and we have $a \oplus b = b \;\Leftrightarrow\; a \preceq b \;\Leftrightarrow\; a \wedge b = a$.

Example 1.

The set $\overline{\mathbb{Z}} \stackrel{\mathrm{def}}{=} \mathbb{Z} \cup \{-\infty, +\infty\}$, with the minimum operation as ⊕ and conventional addition as ⊗, forms a complete idempotent semiring called min-plus algebra, denoted $\overline{\mathbb{Z}}_{\min}$, in which $\varepsilon = +\infty$, $e = 0$, and $\top = -\infty$. Note that in $\overline{\mathbb{Z}}_{\min}$ we have $2 \oplus 5 = 2$, so $5 \preceq 2$; the order is reversed with respect to the conventional order over $\mathbb{Z}$.  ♢
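As a concrete aside (our own sketch, not part of the paper), the min-plus operations and the induced order of (1) can be tried out in a few lines of Python:

```python
# Min-plus algebra: oplus = min, otimes = conventional +, eps = +inf, e = 0.
INF = float('inf')  # the zero element eps (absorbing for otimes)
E = 0               # the unit element e

def oplus(a, b):
    # idempotent sum: a (+) b = min(a, b)
    return min(a, b)

def otimes(a, b):
    # product: a (x) b = a + b
    return a + b

def leq(a, b):
    # order relation (1): a precedes b  iff  a (+) b = b
    return oplus(a, b) == b
```

Here `leq(5, 2)` holds while `leq(2, 5)` does not, mirroring the reversed order noted above.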

Remark 2 ([3]).

The set of n×n-matrices with entries in an idempotent semiring D, endowed with sum and product operations defined by

$(A \oplus B)_{ij} = A_{ij} \oplus B_{ij}, \qquad (A \otimes B)_{ij} = \bigoplus_{k=1}^{n} \big(A_{ik} \otimes B_{kj}\big),$

for all $i, j \in \{1, \ldots, n\}$, forms a complete idempotent semiring, denoted $\mathcal{D}^{n \times n}$. Its unit element (or identity matrix) is the $n \times n$-matrix with entries equal to e on the diagonal and ε elsewhere; the zero (resp. top) element is the $n \times n$-matrix with all entries equal to ε (resp. ⊤). The definition of order (1) implies, for any $A, B \in \mathcal{D}^{n \times n}$,

$A \preceq B \;\Leftrightarrow\; (\forall i, j \in \{1, \ldots, n\})\; A_{ij} \preceq B_{ij}.$

It is possible to deal with nonsquare matrices in this context (including, in particular, row and column vectors) by suitably padding them with ε-rows or columns; this is done only implicitly, as it does not interfere with the relevant parts of the results of operations between matrices.   ♢

A mapping $\Pi : \mathcal{D} \to \mathcal{C}$, with $\mathcal{D}$ and $\mathcal{C}$ two idempotent semirings, is isotone if $(\forall a, b \in \mathcal{D})\; a \preceq b \Rightarrow \Pi(a) \preceq \Pi(b)$.

Remark 3.

The composition of two isotone mappings is isotone.  ♢

Remark 4.

Let Π be an isotone mapping over a complete idempotent semiring $\mathcal{D}$, and let $Y = \{x \in \mathcal{D} \mid \Pi(x) = x\}$ be the set of fixed points of Π. $\bigwedge_{y \in Y} y$ (resp. $\bigoplus_{y \in Y} y$) is the least (resp. greatest) fixed point of Π.  ♢

Algorithms exist to compute the least and greatest fixed points of certain isotone mappings over complete idempotent semirings; in particular, the algorithm presented in [11] is applicable to the relevant mappings considered in this paper.

In a complete idempotent semiring $\mathcal{D}$, the Kleene star operator on $a \in \mathcal{D}$ is defined as $a^* = \bigoplus_{i \ge 0} a^i$, with $a^0 = e$ and $a^i = a^{i-1} \otimes a$ for $i > 0$.

Remark 5.

The implicit equation $x = ax \oplus b$ over a complete idempotent semiring $\mathcal{D}$ admits $x = a^* b$ as its least solution. This applies, in particular, to the case in which $x, b \in \mathcal{D}^n$ and $a \in \mathcal{D}^{n \times n}$ (see [3] and Remark 2).  ♢
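For numeric min-plus matrices, Remark 5 can be checked computationally. The sketch below is our own code (assuming A has no cycle of negative weight, so that the star series converges); it computes $A^*$ by iterating $S \leftarrow S \oplus S \otimes S$ and then the least solution $x = A^* b$:

```python
INF = float('inf')  # eps

def mat_oplus(A, B):
    # entrywise idempotent sum (min)
    return [[min(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_otimes(A, B):
    # min-plus matrix product: (A ⊗ B)_ij = min_k (A_ik + B_kj)
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def star(A):
    # A* = e ⊕ A ⊕ A² ⊕ ...; iterate until the partial sums stabilize
    n = len(A)
    S = [[0 if i == j else A[i][j] for j in range(n)] for i in range(n)]
    while True:
        T = mat_oplus(S, mat_otimes(S, S))
        if T == S:
            return S
        S = T

A = [[INF, 2], [3, INF]]
b = [[10], [0]]
x = mat_otimes(star(A), b)  # least solution of x = A ⊗ x ⊕ b
```

One can verify that the computed x indeed satisfies the implicit equation.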

2.2 Semirings of formal power series

Let $s = \{s(t)\}_{t \in \mathbb{Z}}$ be a sequence over $\overline{\mathbb{Z}}_{\min}$. The δ-transform of s is a formal power series in δ with coefficients in $\overline{\mathbb{Z}}_{\min}$ and exponents in $\mathbb{Z}$, defined by

$s = \bigoplus_{t \in \mathbb{Z}} s(t)\,\delta^t.$

We denote both the sequence and its δ-transform by the same symbol, as no ambiguity will occur. Since

$s \otimes \delta = \bigoplus_{t \in \mathbb{Z}} s(t)\,\delta^{t+1} = \bigoplus_{t \in \mathbb{Z}} s(t-1)\,\delta^t,$

multiplication by δ can be seen as a backward shift operation.

Definition 6.

The set of formal power series in δ with coefficients in $\overline{\mathbb{Z}}_{\min}$ and exponents in $\mathbb{Z}$, with addition and multiplication defined by

$s \oplus s' = \bigoplus_{t \in \mathbb{Z}} \big(s(t) \oplus s'(t)\big)\delta^t, \qquad s \otimes s' = \bigoplus_{t \in \mathbb{Z}} \Big(\bigoplus_{\tau \in \mathbb{Z}} \big(s(\tau) \otimes s'(t-\tau)\big)\Big)\delta^t,$

is a complete idempotent semiring, denoted $\overline{\mathbb{Z}}_{\min}[[\delta]]$. Note that the order in $\overline{\mathbb{Z}}_{\min}[[\delta]]$ is induced by the order in $\overline{\mathbb{Z}}_{\min}$, i. e., $s \preceq s' \Leftrightarrow (\forall t \in \mathbb{Z})\; s(t) \preceq s'(t)$.  ♢

In this paper we will use sequences to represent the number of firings of transitions in TEGs, so that each term s(t) refers to the accumulated number of firings of a certain transition up to time t. Naturally, this interpretation carries over to the terms of a series s corresponding to the δ-transform of such a sequence. A series thus obtained is clearly nonincreasing (in the order of $\overline{\mathbb{Z}}_{\min}$, which, as pointed out before, is the reverse of the standard order of $\mathbb{Z}$), meaning $s(t-1) \succeq s(t)$ for all t. We will henceforth refer to such series as counters.

Definition 7.

The set of counters (i. e., nonincreasing power series) in $\overline{\mathbb{Z}}_{\min}[[\delta]]$ is a complete idempotent semiring, named $\overline{\mathbb{Z}}_{\min,\delta}[[\delta]]$, with zero element $s_\varepsilon$ given by $s_\varepsilon(t) = \varepsilon$ for all t, unit element $s_e$ given by $s_e(t) = e$ for $t \le 0$ and $s_e(t) = \varepsilon$ for $t > 0$, and top element $s_\top$ given by $s_\top(t) = \top$ for all t. We will denote this semiring by Σ, for brevity.  ♢

It is easy to see that $s_\varepsilon$, $s_e$, respectively $s_\top$ are indeed the zero, unit, respectively top elements in Σ: for all $s \in \Sigma$ and $t \in \mathbb{Z}$,

$(s \oplus s_\varepsilon)(t) = s(t) \oplus s_\varepsilon(t) = s(t); \quad (s \otimes s_e)(t) = \bigoplus_{\tau \in \mathbb{Z}} s(\tau) \otimes s_e(t-\tau) = \bigoplus_{\tau \ge t} s(\tau) = s(t)$ (as s is nonincreasing); $\quad (s \oplus s_\top)(t) = s(t) \oplus \top = \top.$

Counters can be represented compactly by omitting terms $s(t)\delta^t$ whenever $s(t) = s(t+1)$. For example, a counter s with $s(t) = e$ for $t \le 3$, $s(t) = 1$ for $4 \le t \le 7$, $s(t) = 3$ for $8 \le t \le 12$, and $s(t) = 6$ for $t \ge 13$ can be written $s = e\delta^3 \oplus 1\delta^7 \oplus 3\delta^{12} \oplus 6\delta^{+\infty}$.
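For numerical experiments it is convenient to expand the compact representation into an explicit sequence over a finite horizon. The encoding below is our own, not the paper's: a term (c, d) means the coefficient c holds up to exponent d, with d = None for the final term.

```python
def expand(terms, horizon):
    # terms: [(coeff, last_t), ...] in increasing order; last_t = None means
    # the coefficient holds for all remaining t (the delta^{+inf} term).
    s, i = [], 0
    for t in range(horizon + 1):
        while terms[i][1] is not None and t > terms[i][1]:
            i += 1
        s.append(terms[i][0])
    return s

# the counter e*d^3 (+) 1*d^7 (+) 3*d^12 (+) 6*d^{+inf} from the text:
s = expand([(0, 3), (1, 7), (3, 12), (6, None)], 15)
```

The expanded sequence is conventionally nondecreasing, as every counter must be.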

2.3 TEG models in idempotent semirings

A timed Petri net is a tuple $(P, T, A, w, h, v)$, where P is a finite set of places (graphically represented by circles), T a finite set of transitions (represented by bars), $A \subseteq (P \times T) \cup (T \times P)$ a set of arcs connecting places to transitions and transitions to places, w a weight function assigning a positive integer weight to every arc, and h a function assigning a nonnegative holding time to each place. In the following, holding times will be restricted to be integers. Furthermore, the function v assigns to each place a nonnegative integer number of tokens residing initially in this place. For any $p \in P$ and $t \in T$, if $(p, t) \in A$, we say that p is an upstream place of t, and t is a downstream transition of p; analogously, if $(t, p) \in A$, t is said to be an upstream transition of p, and p is a downstream place of t. The dynamics of a timed Petri net is governed by the following rules: (i) a transition t can fire if all its upstream places p contain at least $w((p, t))$ tokens that have resided there for at least $h(p)$ time units; (ii) if a transition t fires, it removes $w((p, t))$ tokens from each of its upstream places p and deposits $w((t, \bar{p}))$ tokens in each of its downstream places $\bar{p}$.

Timed event graphs (TEGs) are timed Petri nets in which each place has exactly one upstream and one downstream transition and all arcs have weight 1. In a TEG, we can distinguish input transitions (those that are not affected by the firing of other transitions), output transitions (those that do not affect the firing of other transitions), and internal transitions (those that are neither input nor output transitions). In this paper, we will limit our discussion to SISO TEGs, i. e., TEGs with only one input and one output transition, which we denote respectively by u and y; internal transitions are denoted by $x_i$. An example of a SISO TEG is shown in Fig. 1.

Figure 1 A SISO TEG, with input u and output y.


A TEG is said to be operating under the earliest firing rule if every internal and output transition fires as soon as it is enabled.

With each transition $x_i$, we associate a sequence $\{x_i(t)\}_{t \in \mathbb{Z}}$, for simplicity denoted by the same symbol, where $x_i(t)$ represents the accumulated number of firings of $x_i$ up to and including time t. Similarly, we associate sequences $\{u(t)\}_{t \in \mathbb{Z}}$ and $\{y(t)\}_{t \in \mathbb{Z}}$ with transitions u and y, respectively. Considering the TEG from Fig. 1 operating under the earliest firing rule, in conventional algebra we have

$x_1(t) = \min\big(u(t),\, x_2(t-2) + 2\big),$

i. e., the number of firings of transition $x_1$ up to time t is the minimum between the number of firings of transition u up to time t and the number of firings of transition $x_2$ up to time $t-2$ (because the place connecting $x_2$ to $x_1$ has holding time 2) plus 2 (as the place connecting $x_2$ to $x_1$ initially holds 2 tokens).

In $\overline{\mathbb{Z}}_{\min}$, the number of firings of transition $x_1$ can be conveniently rewritten as

$(\forall t \in \mathbb{Z})\quad x_1(t) = u(t) \oplus 2 \otimes x_2(t-2),$

which, through the δ-transform, can be expressed in Σ as

$x_1 = u \oplus 2\delta^2 x_2.$

We can obtain similar relations for $x_2$ and y and, defining the vector $x = [x_1 \;\; x_2]^\top$, write

$x = \begin{bmatrix} \varepsilon & 2\delta^2 \\ e\delta^3 & \varepsilon \end{bmatrix} x \oplus \begin{bmatrix} e\delta^0 \\ \varepsilon \end{bmatrix} u, \qquad y = \begin{bmatrix} \varepsilon & e\delta^0 \end{bmatrix} x.$
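The implicit equations above can also be checked by direct simulation under the earliest firing rule. The following sketch is our own code (dense counter arrays over a finite horizon, with all counts taken as zero before the start):

```python
def simulate(u):
    # Earliest firing rule for the TEG of Fig. 1:
    #   x1(t) = min(u(t), x2(t-2) + 2)   (place with 2 tokens, holding time 2)
    #   x2(t) = x1(t-3)                  (place with holding time 3)
    #   y(t)  = x2(t)
    H = len(u)
    x1, x2 = [0] * H, [0] * H
    for t in range(H):
        x1[t] = min(u[t], (x2[t - 2] if t >= 2 else 0) + 2)
        x2[t] = x1[t - 3] if t >= 3 else 0
    return x2  # = y

u = [0] * 5 + [1] * 15   # one input firing at t = 5
y = simulate(u)          # the output fires at t = 8 (3 time units later)
```

A single input firing propagates through the holding time 3 of the operation place, as expected.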

In general, a TEG can be described by implicit equations over Σ of the form

(2) $x = Ax \oplus Bu, \qquad y = Cx.$

From Remark 5, the least solution of (2) is given by

(3) $y = CA^*Bu,$

where $G = CA^*B$ is often called the transfer function of the system. For instance, for the system from Fig. 1 we obtain the (scalar) transfer function $G = e\delta^3(2\delta^5)^*$.

2.4 Residuation theory

Residuation theory provides, under certain conditions, greatest (resp. least) solutions to inequalities such as $f(x) \preceq b$ (resp. $f(x) \succeq b$).

Definition 8.

An isotone mapping $f : \mathcal{D} \to \mathcal{C}$, with $\mathcal{D}$ and $\mathcal{C}$ complete idempotent semirings, is said to be residuated if for all $y \in \mathcal{C}$ there exists a greatest solution to the inequality $f(x) \preceq y$. This greatest solution is denoted $f^\sharp(y)$, and the mapping $f^\sharp : \mathcal{C} \to \mathcal{D}$, $y \mapsto \bigoplus\{x \in \mathcal{D} \mid f(x) \preceq y\}$, is called the residual of f.

Mapping f is said to be dually residuated if for all $y \in \mathcal{C}$ there exists a least solution to the inequality $f(x) \succeq y$. This least solution is denoted $f^\flat(y)$, and the mapping $f^\flat : \mathcal{C} \to \mathcal{D}$, $y \mapsto \bigwedge\{x \in \mathcal{D} \mid f(x) \succeq y\}$, is called the dual residual of f.   ♢

Note that, if the equality $f(x) = y$ is solvable, $f^\sharp(y)$ and $f^\flat(y)$ yield its greatest and least solutions, respectively.

Theorem 9 ([4]).

Mapping f as in Def. 8 is residuated if and only if there exists a unique isotone mapping $f^\sharp : \mathcal{C} \to \mathcal{D}$ such that $f \circ f^\sharp \preceq \mathsf{Id}_{\mathcal{C}}$ and $f^\sharp \circ f \succeq \mathsf{Id}_{\mathcal{D}}$, where $\mathsf{Id}_{\mathcal{C}}$ and $\mathsf{Id}_{\mathcal{D}}$ are the identity mappings on $\mathcal{C}$ and $\mathcal{D}$, respectively.  ♢

Remark 10.

For $a \in \mathcal{D}$, the mapping $L_a : \mathcal{D} \to \mathcal{D}$, $x \mapsto ax$, is residuated; its residual is denoted by $L_a^\sharp(x) = a \backslash x$ (“left division by a”). More generally, for $A \in \mathcal{D}^{n \times m}$, the mapping $L_A : \mathcal{D}^{m \times p} \to \mathcal{D}^{n \times p}$, $X \mapsto AX$, is residuated; $A \backslash B$ can be computed as follows: for all $1 \le i \le m$ and $1 \le j \le p$,

$(A \backslash B)_{ij} = \bigwedge_{k=1}^{n} A_{ki} \backslash B_{kj}$

(see [3] and Remark 2).  ♢

Theorem 11 ([4]).

Mapping f as in Def. 8 is dually residuated if and only if $f(\top) = \top$ and $f\big(\bigwedge_{x \in A} x\big) = \bigwedge_{x \in A} f(x)$ for every subset $A \subseteq \mathcal{D}$.  ♢

2.5 Optimal control of TEGs

Assume that a TEG to be controlled is modeled by equations (2) and that an output-reference $z \in \Sigma$ is given. Under the just-in-time paradigm, we aim at firing the input transition u the least possible number of times while guaranteeing that the output transition y fires, by each time instant, at least as many times as specified by z. In other words, we seek the greatest u (in the order of Σ) such that $y = Gu \preceq z$. Based on (3) and Remark 10, the solution is directly obtained by

(4) $u_{\mathrm{opt}} = G \backslash z.$

Example 12.

For the TEG from Fig. 1, suppose it is required that transition y fires once at time $t = 43$, twice at $t = 47$, and three times at $t = 55$, meaning the accumulated number of firings of y should be e ($= 0$) for $t \le 42$, 1 for $43 \le t \le 46$, 3 for $47 \le t \le 54$, and 6 for $t \ge 55$. This is represented by the output-reference $z = e\delta^{42} \oplus 1\delta^{46} \oplus 3\delta^{54} \oplus 6\delta^{+\infty}$. Applying (4), we get $u_{\mathrm{opt}} = e\delta^{38} \oplus 1\delta^{41} \oplus 2\delta^{43} \oplus 3\delta^{46} \oplus 4\delta^{51} \oplus 6\delta^{+\infty}$, and the corresponding optimal output is $y_{\mathrm{opt}} = Gu_{\mathrm{opt}} = e\delta^{41} \oplus 1\delta^{44} \oplus 2\delta^{46} \oplus 3\delta^{49} \oplus 4\delta^{54} \oplus 6\delta^{+\infty}$. One can verify that $y_{\mathrm{opt}} \preceq z$. These computations can be performed with the aid of the C++ toolbox introduced in [9]. We interpret the place with holding time 3 between $x_1$ and $x_2$, initially empty, as the operation of the system, and the bottom place with holding time 2 between $x_2$ and $x_1$, with two initial tokens, as a double-capacity resource. Under this interpretation, the firings of transitions $x_1$ and $x_2$ represent resource-allocation and resource-release events, respectively. This paves the way for the examples of Sections 4 and 5, where the resource will be shared with other (sub)systems. The optimal schedule obtained above can be displayed in a chart as shown in Fig. 2, where each row corresponds to one instance of the resource.  ♢
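Example 12 can be reproduced numerically with the finite-horizon encoding of our earlier sketches (our own code, not the toolbox of [9]). Conventionally, the greatest u with $Gu \preceq z$ satisfies $u(t) = \max_k \big(z(t+k) - g(k)\big)$, where g is the impulse response of $G = e\delta^3(2\delta^5)^*$, i. e., $g(k) = 0$ for $k \le 3$ and $2\lceil (k-3)/5 \rceil$ otherwise:

```python
def left_divide(g, z):
    # finite-horizon residual: greatest counter u with
    # min_k (g(k) + u(t-k)) >= z(t) conventionally, for all t
    H = len(z) - 1
    return [max(z[t + k] - g[k] for k in range(H + 1 - t))
            for t in range(H + 1)]

H = 80
g = [0 if k <= 3 else 2 * -(-(k - 3) // 5) for k in range(H + 1)]
z = [0 if t <= 42 else 1 if t <= 46 else 3 if t <= 54 else 6
     for t in range(H + 1)]
u = left_divide(g, z)
# u coincides with u_opt = e d^38 (+) 1 d^41 (+) 2 d^43 (+) 3 d^46 (+) 4 d^51 (+) 6 d^{+inf}
```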

3 Optimal control of TEGs with output-reference update

The material of this section is a dual version, adapted to the point of view of counters, of the results from [12].

In practice, it may be necessary to update the reference for the output of a system during run-time, for instance when customer demand is increased and a new production objective must be taken into account. For a system like the one from Example 12, let reference z be updated to a new one, $z'$, at time T. The problem at hand is to find the input $u'_{\mathrm{opt}}$ which optimally tracks $z'$ without, however, changing the inputs given up to time T. Define the mapping $r_T : \Sigma \to \Sigma$,

(5) $[r_T(u)](t) = \begin{cases} u(t), & \text{if } t \le T; \\ \varepsilon, & \text{if } t > T. \end{cases}$

Our objective can then be restated as follows: find the greatest element $u'_{\mathrm{opt}}$ of the set

$F = \{u \in \Sigma \mid Gu \preceq z' \text{ and } r_T(u) = r_T(u_{\mathrm{opt}})\},$

where $u_{\mathrm{opt}}$ is the optimal input with respect to reference z, computed as in (4). The following theorem provides, given that certain conditions are met, a way to compute this greatest element.

Theorem 13 ([12]).

Let $\mathcal{D}$ and $\mathcal{C}$ be complete idempotent semirings, $f_1, f_2 : \mathcal{D} \to \mathcal{C}$ residuated mappings, and $c_1, c_2 \in \mathcal{C}$. If the set

$S = \{x \in \mathcal{D} \mid f_1(x) \preceq c_1 \text{ and } f_2(x) = c_2\}$

is nonempty, we have $\bigoplus_{x \in S} x = f_1^\sharp(c_1) \wedge f_2^\sharp(c_2)$.  ♢

Figure 2 Optimal schedule obtained in Example 12; the gray bars represent the operation of the system, and the dashed bars are the delays imposed by the resource.


An obvious correspondence between F and S can be established by taking $\mathcal{D}$ and $\mathcal{C}$ both as Σ, $f_1$ as $L_G$ (which is well known to be residuated, see Remark 10), $c_1$ as $z'$, $f_2$ as $r_T$, and $c_2$ as $r_T(u_{\mathrm{opt}})$.

Remark 14.

Mapping rT as defined in (5) is residuated, with

$[r_T^\sharp(u)](t) = \begin{cases} u(t), & \text{if } t \le T; \\ u(T), & \text{if } t > T. \end{cases}$

In fact, $r_T$ is clearly isotone and we have $r_T \circ r_T^\sharp = r_T \preceq \mathsf{Id}_\Sigma$ and $r_T^\sharp \circ r_T \succeq \mathsf{Id}_\Sigma$, so the conditions from Theorem 9 are fulfilled.  ♢
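In the finite-horizon encoding used in our earlier sketches, $r_T$ and its residual are one-liners (our code, with ε represented by +inf):

```python
INF = float('inf')  # eps

def r_T(u, T):
    # (5): keep u up to time T, eps afterwards
    return [u[t] if t <= T else INF for t in range(len(u))]

def r_T_sharp(u, T):
    # residual of r_T: agree with u up to T, frozen at u(T) afterwards
    return [u[t] if t <= T else u[T] for t in range(len(u))]

u, T = [0, 0, 1, 2, 2, 4, 5], 3
```

One can check the compositions used in the text, e. g. that applying $r_T$ after its residual gives back $r_T$.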

Hence, as long as set F is nonempty, Theorem 13 provides the desired solution

(6) $u'_{\mathrm{opt}} = (G \backslash z') \wedge r_T^\sharp(u_{\mathrm{opt}}).$

In order to check for nonemptiness of F, let us consider the set

$\tilde{F} = \{u \in \Sigma \mid r_T(u) = r_T(u_{\mathrm{opt}})\},$

i. e., the set of counters that up to and including time T are identical to $u_{\mathrm{opt}}$. Consider now

$\underline{u} \stackrel{\mathrm{def}}{=} \bigwedge_{u \in \tilde{F}} u = r_T(u_{\mathrm{opt}}).$

Since $r_T \circ r_T = r_T$ and therefore $r_T(\underline{u}) = r_T(r_T(u_{\mathrm{opt}})) = r_T(u_{\mathrm{opt}})$, we have $\underline{u} \in \tilde{F}$. Isotony of $L_G$ thus implies

(7) $F \neq \emptyset \;\Leftrightarrow\; G\underline{u} \preceq z'.$

Example 15.

For the system from Example 12 (Fig. 1) operating according to the optimal input obtained for output-reference z, suppose that at time $T = 40$ a new demand is received: three firings of y are now required at $t = 54$ (instead of at $t = 55$). This translates to $z' = e\delta^{42} \oplus 1\delta^{46} \oplus 3\delta^{53} \oplus 6\delta^{+\infty}$. In order to determine whether $F \neq \emptyset$, following (7) we check if $G\underline{u} \preceq z'$. We have $\underline{u} = r_T(u_{\mathrm{opt}}) = e\delta^{38} \oplus 1\delta^{40} \oplus \varepsilon\delta^{+\infty}$, so $G\underline{u} = e\delta^{41} \oplus 1\delta^{43} \oplus 2\delta^{46} \oplus 3\delta^{48} \oplus 4\delta^{51} \oplus 5\delta^{53} \oplus 6\delta^{56} \oplus 7\delta^{58} \oplus \cdots = (e\delta^{41} \oplus 1\delta^{43})(2\delta^5)^* \preceq z'$, implying $F \neq \emptyset$. From Theorem 13 (and recalling that $r_T^\sharp \circ r_T = r_T^\sharp$), we then have $u'_{\mathrm{opt}}$ as in (6), and hence $y'_{\mathrm{opt}} = e\delta^{41} \oplus 1\delta^{43} \oplus 2\delta^{46} \oplus 3\delta^{48} \oplus 4\delta^{53} \oplus 6\delta^{+\infty}$. The updated optimal schedule is shown in Fig. 3, to be interpreted as explained in Example 12.  ♢

Figure 3 Updated optimal schedule obtained in Example 15; the gray bars represent the operation of the system, whereas the dashed bars are the delays imposed by the resource.


In case $G\underline{u} \npreceq z'$ (and hence $F = \emptyset$), the past inputs make it impossible for the system to respect $z'$. Intuitively, having implemented a just-in-time policy $u_{\mathrm{opt}}$ for a reference z up to time T may make it impossible to satisfy a more demanding new reference $z'$. Since the condition $r_T(u) = r_T(u_{\mathrm{opt}})$ cannot be relaxed, in order to have a solution we must then increase $z'$; more precisely, we wish to find the least counter $\bar{z} \succeq z'$ such that

$F_{\bar{z}} = \{u \in \Sigma \mid Gu \preceq \bar{z} \text{ and } r_T(u) = r_T(u_{\mathrm{opt}})\}$

is not empty. The following result provides the answer.

Proposition 16.

The least counter $\bar{z} \succeq z'$ such that $F_{\bar{z}} \neq \emptyset$ is $\bar{z} = z' \oplus (G\underline{u})$.

Proof.

Since $G\underline{u} \preceq z' \oplus (G\underline{u}) = \bar{z}$, we have $\underline{u} \in F_{\bar{z}}$, therefore $F_{\bar{z}} \neq \emptyset$. Take now an arbitrary $\tilde{z} \succeq z'$ such that $F_{\tilde{z}} \neq \emptyset$, and take any $v \in F_{\tilde{z}}$. Clearly $v \in \tilde{F}$ and hence $\underline{u} \preceq v$; as $L_G$ is isotone, we have $G\underline{u} \preceq Gv \preceq \tilde{z}$, implying $\bar{z} = z' \oplus (G\underline{u}) \preceq z' \oplus \tilde{z} = \tilde{z}$.  □
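In the counter encoding of our earlier sketches, Proposition 16 amounts to a pointwise minimum, since ⊕ is min: the relaxed reference merges $z'$ with the output $G\underline{u}$ forced by the frozen past (our code, with illustrative values that are not from the paper):

```python
def relax(z_new, g_u_under):
    # z_bar = z' (+) G(u_underline): pointwise min of the two counters
    return [min(a, b) for a, b in zip(z_new, g_u_under)]

z_new = [0, 1, 3, 3, 6, 6]   # new reference (illustrative values)
g_u   = [0, 1, 2, 3, 5, 6]   # output reachable given the frozen past inputs
z_bar = relax(z_new, g_u)    # [0, 1, 2, 3, 5, 6]
```

Conventionally, the relaxed reference never demands more firings than $z'$ at any instant, which is exactly the order relation $\bar{z} \succeq z'$.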

A correspondence between $F_{\bar{z}}$ and S can be established analogously to that between F and S, only taking $c_1$ as $\bar{z}$ (instead of $z'$). Applying Theorem 13 and recalling that $r_T^\sharp \circ r_T = r_T^\sharp$, we obtain

(8) $u'_{\mathrm{opt}} = (G \backslash \bar{z}) \wedge r_T^\sharp(u_{\mathrm{opt}}).$

Note that in case $F \neq \emptyset$ we have $\bar{z} = z' \oplus (G\underline{u}) = z'$ and we therefore recover solution (6).

Example 17.

Consider, once more, the system from Example 12 (Fig. 1) operating according to the optimal input obtained for output-reference z, and suppose the same new output-reference $z'$ as in Example 15 is received, only now at time $T = 42$ instead of at time 40. We again use (7) to check whether set F is empty; in this case, we obtain $\underline{u} = e\delta^{38} \oplus 1\delta^{41} \oplus 2\delta^{42} \oplus \varepsilon\delta^{+\infty}$ and hence $G\underline{u} = e\delta^{41} \oplus 1\delta^{44} \oplus 2\delta^{46} \oplus 3\delta^{49} \oplus 4\delta^{51} \oplus 5\delta^{54} \oplus 6\delta^{56} \oplus 7\delta^{59} \oplus \cdots = (e\delta^{41} \oplus 1\delta^{44})(2\delta^5)^* \npreceq z'$, implying $F = \emptyset$. So, we seek the least $\bar{z} \succeq z'$ such that $F_{\bar{z}} \neq \emptyset$; according to Proposition 16, we get $\bar{z} = z' \oplus (G\underline{u}) = e\delta^{42} \oplus 1\delta^{46} \oplus 3\delta^{53} \oplus 5\delta^{54} \oplus 6\delta^{+\infty}$, which is the reference we shall effectively track. Then, from (8) we have $u'_{\mathrm{opt}} = e\delta^{38} \oplus 1\delta^{41} \oplus 2\delta^{43} \oplus 3\delta^{46} \oplus 4\delta^{50} \oplus 5\delta^{51} \oplus 6\delta^{+\infty}$, and hence $y'_{\mathrm{opt}} = e\delta^{41} \oplus 1\delta^{44} \oplus 2\delta^{46} \oplus 3\delta^{49} \oplus 4\delta^{53} \oplus 5\delta^{54} \oplus 6\delta^{+\infty}$. The updated optimal schedule is shown in Fig. 4, to be interpreted as explained in Example 12.  ♢

Figure 4 Updated optimal schedule obtained in Example 17; the gray bars represent the operation of the system, whereas the dashed bars are the delays imposed by the resource.


Figure 5 A number of TEGs with a single shared resource.


4 Modeling and optimal control of TEGs with resource sharing

We now turn our attention to systems in which a number of TEGs share one or multiple resources. We first focus on the simple case of a single shared resource (Sections 4.1 and 4.2); the discussion is based on [13], where the authors also present the more general case of two shared resources. Here, we take it one step further and explicitly generalize the approach to the case of arbitrarily many shared resources (Section 4.3).

4.1 Modeling of TEGs with one shared resource

Consider a system consisting of TEGs $S^1, \ldots, S^K$ sharing a resource (with arbitrary capacity), as illustrated in Fig. 5. $H^k$ represents the internal dynamics of $S^k$. β may, in general, be a TEG (or, in simple cases, just a single place) describing the capacity of the resource as well as the minimal delay between release and allocation events. Clearly, the overall system is no longer a TEG, as there are places with several upstream and/or several downstream transitions. For simplicity, let us assume that input transitions ($u^k$) are connected to resource-allocation transitions ($x_A^k$) via a single place with zero delay and no initial tokens, the same being true for the connection between resource-release transitions ($x_R^k$) and output transitions ($y^k$). These assumptions will be dropped in Section 4.3.

It is not possible to model systems exhibiting resource-sharing phenomena by linear equations such as (2). Considering a system like the one from Fig. 5, in order to express the relationship among the counters $x_A^k$ and $x_R^k$, $k \in \{1, \ldots, K\}$, the Hadamard product of series is introduced ([10]).

Definition 18.

The Hadamard product of $s_1, s_2 \in \Sigma$, written $s_1 \odot s_2$, is the counter defined as follows:

$(\forall t \in \mathbb{Z})\quad (s_1 \odot s_2)(t) = s_1(t) \otimes s_2(t).$

This operation is commutative, distributes over ⊕ and ∧, has neutral element $e\delta^{+\infty}$, and $s_\varepsilon$ is absorbing for it (i. e., $(\forall s \in \Sigma)\; s \odot s_\varepsilon = s_\varepsilon$).  ♢
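In the finite-horizon counter encoding of our sketches, the Hadamard product is just pointwise conventional addition (our code):

```python
def hadamard(s1, s2):
    # (s1 . s2)(t) = s1(t) (x) s2(t) = s1(t) + s2(t)
    return [a + b for a, b in zip(s1, s2)]

unit = [0] * 5          # e d^{+inf}: all-zero counter, neutral for the product
s = [0, 1, 1, 3, 3]
```

Adding two counters pointwise corresponds to counting joint events, which is exactly how allocations and releases will be combined below.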

Figure 6 A join and a fork structure.


Consider a join structure (i. e., a place with two or more incoming transitions) as shown in Fig. 6. At any time instant t, the accumulated number of firings of γ, in conventional algebra, cannot exceed that of $\lambda_1$ and $\lambda_2$ combined, which translates to $\lambda_1 \odot \lambda_2 \preceq \gamma$. Similarly, for a fork structure (i. e., a place with two or more outgoing transitions) such as the one shown in Fig. 6, the accumulated number of firings of $\gamma_1$ and $\gamma_2$ combined, again in conventional algebra, can never exceed that of λ, meaning $\lambda \preceq \gamma_1 \odot \gamma_2$.

Generalizing these ideas allows us to write, for the system from Fig. 5,

$x_R^1 \odot \cdots \odot x_R^K \preceq \alpha_1 \qquad \text{and} \qquad \alpha_2 \preceq x_A^1 \odot \cdots \odot x_A^K,$

which, combined with $\beta\alpha_1 \preceq \alpha_2$, leads to

(9) $\beta \Big( \bigodot_{k=1}^{K} x_R^k \Big) \preceq \bigodot_{k=1}^{K} x_A^k.$

4.2 Optimal control of TEGs with one shared resource

For a system like the one from Fig. 5, competition for the resource will, in general, make it impossible for all subsystems to concurrently follow a just-in-time schedule with respect to their individual output-references. One way to settle the dispute is to introduce a priority policy among the subsystems. We henceforth assume, without loss of generality, that subsystem $S^k$ has higher priority than $S^{k+1}$, for all $k \in \{1, \ldots, K-1\}$. The priority policy is based on a simple rule: for each $k \in \{2, \ldots, K\}$ and for all $j \in \{1, \ldots, k-1\}$, $S^k$ cannot interfere with the performance of $S^j$.

Figure 7 Optimal schedules obtained in Example 20; the gray, black, and crosshatched bars represent the operation of $S^1$, $S^2$, and $S^3$, respectively, whereas the dashed bars are the delays imposed by the resource.


Let the input-output behavior of each $S^k$, ignoring all other subsystems, be described by $y^k = G^k u^k$ (which, according to the assumptions made above, is equivalent to $x_R^k = G^k x_A^k$), and assume that corresponding references $z^k$ are given. The subsystem with highest priority, $S^1$, is free to use the resource at will; therefore, we can effectively neglect all other subsystems and simply compute its optimal input as $u_{\mathrm{opt}}^1 = G^1 \backslash z^1$ (cf. Section 2.5). For $S^2$, we must compute the optimal input under the restriction that the optimal behavior of $S^1$ is unchanged; based on (9), this means we must respect

(10) $\beta\big(x_{R\mathrm{opt}}^1 \odot x_R^2\big) \preceq x_{A\mathrm{opt}}^1 \odot x_A^2.$

In fact, we want to determine the greatest $x_A^2$ (and thus also the corresponding $u^2$) satisfying both $G^2 x_A^2 \preceq z^2$ and (10); seeing that (10) implies

(11) $x_{R\mathrm{opt}}^1 \odot x_R^2 \preceq \beta \backslash \big(x_{A\mathrm{opt}}^1 \odot x_A^2\big),$

the following result comes in handy.

Proposition 19 ([10]).

For any $a \in \Sigma$, the mapping $\Pi_a : \Sigma \to \Sigma$, $x \mapsto a \odot x$, is residuated. For any $b \in \Sigma$, $\Pi_a^\sharp(b)$, denoted $b \oslash a$, is the greatest $x \in \Sigma$ such that $a \odot x \preceq b$.  ♢

From Proposition 19, inequality (11) leads to

(12) $x_R^2 \preceq \left( \beta \backslash \left( x_{A,\mathrm{opt}}^1 \odot x_A^2 \right) \right) \oslash x_{R,\mathrm{opt}}^1;$

writing $x_R^2 = G^2 x_A^2$ and combining (12) with $G^2 x_A^2 \preceq z^2$ yields

(13) $G^2 x_A^2 \preceq \left( \left( \beta \backslash \left( x_{A,\mathrm{opt}}^1 \odot G^2 x_A^2 \right) \right) \oslash x_{R,\mathrm{opt}}^1 \right) \wedge z^2,$

which, in turn, implies

$x_A^2 \preceq G^2 \backslash \left( \left( \left( \beta \backslash \left( x_{A,\mathrm{opt}}^1 \odot G^2 x_A^2 \right) \right) \oslash x_{R,\mathrm{opt}}^1 \right) \wedge z^2 \right).$

Since for any $s_1, s_2 \in \Sigma$ it holds that $s_1 \preceq s_2 \Leftrightarrow s_1 = s_1 \wedge s_2$, one can see that (13) is equivalent to

(14) $x_A^2 = x_A^2 \wedge \left( G^2 \backslash \left( \left( \left( \beta \backslash \left( x_{A,\mathrm{opt}}^1 \odot G^2 x_A^2 \right) \right) \oslash x_{R,\mathrm{opt}}^1 \right) \wedge z^2 \right) \right).$

The greatest $x_A^2$ satisfying (14), $x_{A,\mathrm{opt}}^2$, is the greatest fixed point (provided it exists) of the mapping $\Phi^2 : \Sigma \to \Sigma$,

(15) $\Phi^2(x) = x \wedge \left( G^2 \backslash \left( \left( \left( \beta \backslash \left( x_{A,\mathrm{opt}}^1 \odot G^2 x \right) \right) \oslash x_{R,\mathrm{opt}}^1 \right) \wedge z^2 \right) \right).$

Notice that $\Phi^2$ consists of a succession of order-preserving operations (Hadamard product $\odot$ and its residual $\oslash$, left-division $\backslash$, and infimum $\wedge$), which, in turn, can be seen as the composition of corresponding isotone mappings (for instance, following the notation of Proposition 19, $s_1 \odot s_2$ corresponds to $\Pi_{s_1}(s_2)$, and similarly for the other operations). Therefore, according to Remark 3, $\Phi^2$ is also isotone; Remark 4 then ensures the existence of its greatest fixed point, which yields the desired optimal solution $x_{A,\mathrm{opt}}^2 (= u_{\mathrm{opt}}^2)$.
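Computationally, one common way to obtain the greatest fixed point of an isotone self-map is to iterate it from the top element of the lattice until the iteration stabilizes. The sketch below shows this generic scheme on a toy finite lattice (componentwise-ordered integer tuples); the lattice and the mapping are illustrative stand-ins, not the $\Sigma$ and $\Phi^2$ of the paper.

```python
def greatest_fixed_point(phi, top, max_iter=10_000):
    # Iterate an isotone self-map from the top element; on a finite lattice
    # the sequence x, phi(x), phi(phi(x)), ... is nonincreasing and settles
    # on the greatest fixed point.
    x = top
    for _ in range(max_iter):
        nxt = phi(x)
        if nxt == x:
            return x
        x = nxt
    raise RuntimeError("no convergence within max_iter iterations")

# Toy lattice: integer pairs ordered componentwise, meet = componentwise min.
def phi(v):
    # An isotone map of the shape x -> x /\ f(x), as used in the text.
    bound = (v[1] + 3, 7)          # f depends monotonically on v
    return tuple(min(a, b) for a, b in zip(v, bound))

print(greatest_fixed_point(phi, (10, 10)))  # (10, 7)
```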

Using the same procedure, we obtain, for each $k$,

$x_A^k = x_A^k \wedge \left( G^k \backslash \left( \left( \left( \beta \backslash \left( \left( \bigodot_{i=1}^{k-1} x_{A,\mathrm{opt}}^i \right) \odot G^k x_A^k \right) \right) \oslash \bigodot_{i=1}^{k-1} x_{R,\mathrm{opt}}^i \right) \wedge z^k \right) \right)$

and, defining a mapping $\Phi^k$ by analogy with (15), its greatest fixed point provides $x_{A,\mathrm{opt}}^k$ and, therefore, also $u_{\mathrm{opt}}^k$.

Figure 8: Three TEGs sharing a resource with capacity 2.

Figure 9: A number of TEGs with multiple shared resources.

Example 20.

Consider the system from Fig. 8, where subsystems $S^1$, $S^2$, and $S^3$ share a resource with capacity 2. $S^1$, including the resource and ignoring $S^2$ and $S^3$, is the system from Example 12, whose transfer function is $G^1 = e\delta^3 (2\delta^5)^*$ (cf. Section 2.3). For $S^2$ and $S^3$, we obtain $G^2 = e\delta^5 (2\delta^7)^*$ and $G^3 = e\delta^2 (2\delta^4)^*$, respectively. In this example, $\beta = 2\delta^2$. The references $z^1 = e\delta^{42} \oplus 1\delta^{46} \oplus 3\delta^{55} \oplus 6\delta^{+\infty}$, $z^2 = e\delta^{39} \oplus 1\delta^{50} \oplus 2\delta^{54} \oplus 3\delta^{+\infty}$, and $z^3 = e\delta^{52} \oplus 3\delta^{+\infty}$ are given. As $S^1$ has the highest priority, we can simply compute $u_{\mathrm{opt}}^1$ and $y_{\mathrm{opt}}^1 = G^1 u_{\mathrm{opt}}^1 = e\delta^{41} \oplus 1\delta^{45} \oplus 2\delta^{46} \oplus 3\delta^{50} \oplus 4\delta^{55} \oplus 6\delta^{+\infty}$. Next, we determine $x_{A,\mathrm{opt}}^2$ by following the procedure described in this section. Computing the greatest fixed point of $\Phi^2$ as in (15), we get $x_{A,\mathrm{opt}}^2 = e\delta^{28} \oplus 1\delta^{31} \oplus 2\delta^{35} \oplus 3\delta^{+\infty}$ $(= u_{\mathrm{opt}}^2)$ and $x_{R,\mathrm{opt}}^2 = e\delta^{33} \oplus 1\delta^{36} \oplus 2\delta^{40} \oplus 3\delta^{+\infty}$ $(= y_{\mathrm{opt}}^2)$. Finally, the greatest fixed point of the mapping $\Phi^3$, defined by analogy with (15), yields $x_{A,\mathrm{opt}}^3 = e\delta^{24} \oplus 1\delta^{27} \oplus 2\delta^{48} \oplus 3\delta^{+\infty}$ $(= u_{\mathrm{opt}}^3)$, and so $x_{R,\mathrm{opt}}^3 = e\delta^{26} \oplus 1\delta^{29} \oplus 2\delta^{50} \oplus 3\delta^{+\infty}$ $(= y_{\mathrm{opt}}^3)$. These optimal schedules are shown in Fig. 7. Because the availability of the resource for $S^2$ is subject to the operation of $S^1$, the firings of $y^2$ have to be considerably earlier than required by $z^2$; this is, however, the latest they can be so as to respect $z^2$ without interfering with $S^1$. A similar effect can be observed in $y^3$ due to the limitations imposed by the operations of $S^1$ and $S^2$.  ♢

4.3 Modeling and optimal control of TEGs with multiple shared resources

Consider, as before, a system comprising $K$ TEGs $S^1, \ldots, S^K$, but now suppose they share $L$ resources, as shown in Fig. 9. Similarly to Section 4.1, each $\beta_\ell$, $\ell \in \{1, \ldots, L\}$, is a TEG (or possibly just a place) describing the capacity as well as the minimal delay between release and allocation events of resource $\ell$. We denote by $x_A^{k\ell}$ (resp. $x_R^{k\ell}$) the transition, and associated counter, representing the allocation (resp. release) of resource $\ell$ by subsystem $S^k$. Accordingly, $H^{k\ell}$ denotes the internal dynamics of $S^k$ between $x_A^{k\ell}$ and $x_R^{k\ell}$. As opposed to Section 4.1, here we consider that there may also be some dynamics between input transitions ($u^k$) and resource-allocation transitions for the first resource ($x_A^{k1}$), modeled by TEGs (or, again, simply single places) called $P^{k1}$, as well as between resource-release transitions for the last resource ($x_R^{kL}$) and output transitions ($y^k$), called $P^{k(L+1)}$. The TEG (or single place) describing the dynamics between the release of resource $\ell-1$ and the allocation of resource $\ell$ by $S^k$ (i.e., between $x_R^{k(\ell-1)}$ and $x_A^{k\ell}$) is denoted $P^{k\ell}$.

Through the same reasoning as applied in Section 4.1, it is straightforward to conclude that, for each $\ell \in \{1,\ldots,L\}$, the relationship among the counters $x_A^{k\ell}$ and $x_R^{k\ell}$ must respect

(16) $\beta_\ell \left( \bigodot_{k=1}^{K} x_R^{k\ell} \right) \preceq \bigodot_{k=1}^{K} x_A^{k\ell}.$

The optimal (just-in-time) schedule for the usage of the resources is sought under the same priority policy as in Section 4.2. Let the input-output behavior of each $S^k$, considering the resources and ignoring all other subsystems, be described as usual by $y^k = G^k u^k$, and let us again assume corresponding references $z^k$ to be given. For $S^1$, we can simply compute the optimal input $u_{\mathrm{opt}}^1 = G^1 \backslash z^1$. Based on $u_{\mathrm{opt}}^1$, we can obtain the optimal firing schedules for the remaining transitions of $S^1$. For instance, we have $x_{A,\mathrm{opt}}^{11} = P^{11} u_{\mathrm{opt}}^1$ and $x_{R,\mathrm{opt}}^{11} = H^{11} x_{A,\mathrm{opt}}^{11}$. In general, for each $\ell \in \{2,\ldots,L\}$ we can then successively compute $x_{A,\mathrm{opt}}^{1\ell} = P^{1\ell} x_{R,\mathrm{opt}}^{1(\ell-1)}$ and $x_{R,\mathrm{opt}}^{1\ell} = H^{1\ell} x_{A,\mathrm{opt}}^{1\ell}$.
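The successive computation above amounts to propagating a counter through a chain of TEG transfers. Assuming, purely for illustration, that each transfer $P^{1\ell}$ or $H^{1\ell}$ is a single pure-delay place $\delta^\tau$, the chain reduces to repeated shifts of the counter:

```python
def apply_delay(x, tau):
    # Pure-delay transfer delta^tau acting on a counter, in conventional
    # terms: (P x)(t) = x(t - tau), with x taken as 0 before time 0.
    return [x[t - tau] if t >= tau else 0 for t in range(len(x))]

def propagate(u, delays):
    # Chain the input schedule u through a sequence of pure-delay stages
    # (e.g. allocation, processing, allocation, ...); 'delays' lists the
    # delay of each stage in order. Returns the schedule after every stage.
    schedules, x = [], u
    for tau in delays:
        x = apply_delay(x, tau)
        schedules.append(x)
    return schedules

u = [0, 1, 2, 2, 2, 2]
stages = propagate(u, [1, 2])  # a 1-step stage followed by a 2-step stage
print(stages[-1])  # [0, 0, 0, 0, 1, 2]
```

General $P^{k\ell}$ and $H^{k\ell}$ are of course full (min,+) transfer series rather than pure delays; the sketch only shows the chaining pattern.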

In order to determine the optimal input $u_{\mathrm{opt}}^2$ for $S^2$ (i.e., the greatest $u^2$ such that $G^2 u^2 \preceq z^2$) while guaranteeing no interference with the optimal behavior of $S^1$, based on (16) we must have, for each $\ell \in \{1,\ldots,L\}$,

(17) $\beta_\ell \left( x_{R,\mathrm{opt}}^{1\ell} \odot x_R^{2\ell} \right) \preceq x_{A,\mathrm{opt}}^{1\ell} \odot x_A^{2\ell}.$

Notice that, for a just-in-time input $u^2$ computed so that (17) holds for $\ell = 1$, it follows that $x_A^{21} = P^{21} u^2$, and hence $x_R^{21} = H^{21} x_A^{21} = H^{21} P^{21} u^2$. In fact, the optimal input we seek is such that (17) holds for every $\ell$ and, furthermore, such that a just-in-time behavior is enforced throughout the system, implying $x_A^{2\ell} = P^{2\ell} x_R^{2(\ell-1)}$ for all $\ell \in \{2,\ldots,L\}$. This means we can express any $x_A^{2\ell}$ in terms of $u^2$; defining the terms

$\tilde{P}^{2\ell} = \begin{cases} P^{21}, & \text{if } \ell = 1, \\ P^{2\ell} H^{2(\ell-1)} \tilde{P}^{2(\ell-1)}, & \text{if } 2 \le \ell \le L, \end{cases}$

we have $x_A^{2\ell} = \tilde{P}^{2\ell} u^2$, and hence $x_R^{2\ell} = H^{2\ell} x_A^{2\ell} = H^{2\ell} \tilde{P}^{2\ell} u^2$. Then, we can rewrite (17) as

$\beta_\ell \left( x_{R,\mathrm{opt}}^{1\ell} \odot \left( H^{2\ell} \tilde{P}^{2\ell} u^2 \right) \right) \preceq x_{A,\mathrm{opt}}^{1\ell} \odot \left( \tilde{P}^{2\ell} u^2 \right),$

which, proceeding similarly to Section 4.2, leads to

$u^2 \preceq \left( H^{2\ell} \tilde{P}^{2\ell} \right) \backslash \left( \left( \beta_\ell \backslash \left( x_{A,\mathrm{opt}}^{1\ell} \odot \left( \tilde{P}^{2\ell} u^2 \right) \right) \right) \oslash x_{R,\mathrm{opt}}^{1\ell} \right).$

Define, for each $\ell \in \{1,\ldots,L\}$, the mapping $\Phi^{2\ell} : \Sigma \to \Sigma$,

$\Phi^{2\ell}(x) = \left( H^{2\ell} \tilde{P}^{2\ell} \right) \backslash \left( \left( \beta_\ell \backslash \left( x_{A,\mathrm{opt}}^{1\ell} \odot \left( \tilde{P}^{2\ell} x \right) \right) \right) \oslash x_{R,\mathrm{opt}}^{1\ell} \right).$

We seek the greatest $u^2$ such that $G^2 u^2 \preceq z^2$ and $(\forall \ell \in \{1,\ldots,L\})\; u^2 \preceq \Phi^{2\ell}(u^2)$. This amounts to looking for the greatest fixed point of the (isotone) mapping $\Phi^2 : \Sigma \to \Sigma$,

$\Phi^2(x) = x \wedge \left( G^2 \backslash z^2 \right) \wedge \bigwedge_{\ell=1}^{L} \Phi^{2\ell}(x).$

The same arguments presented above can be applied to determine $u_{\mathrm{opt}}^k$ for an arbitrary $k \in \{1,\ldots,K\}$. Defining

(19) $\tilde{P}^{k\ell} = \begin{cases} P^{k1}, & \text{if } \ell = 1, \\ P^{k\ell} H^{k(\ell-1)} \tilde{P}^{k(\ell-1)}, & \text{if } 2 \le \ell \le L, \end{cases}$

and expressing each $x_A^{k\ell}$ and $x_R^{k\ell}$ in terms of $u^k$, from (16) we obtain, for each $\ell \in \{1,\ldots,L\}$,

$\beta_\ell \left( \left( \bigodot_{i=1}^{k-1} x_{R,\mathrm{opt}}^{i\ell} \right) \odot \left( H^{k\ell} \tilde{P}^{k\ell} u^k \right) \right) \preceq \left( \bigodot_{i=1}^{k-1} x_{A,\mathrm{opt}}^{i\ell} \right) \odot \left( \tilde{P}^{k\ell} u^k \right).$

Then, proceeding as before and defining, for each $\ell \in \{1,\ldots,L\}$, the mapping $\Phi^{k\ell} : \Sigma \to \Sigma$,

$\Phi^{k\ell}(x) = \left( H^{k\ell} \tilde{P}^{k\ell} \right) \backslash \left( \left( \beta_\ell \backslash \left( \left( \bigodot_{i=1}^{k-1} x_{A,\mathrm{opt}}^{i\ell} \right) \odot \left( \tilde{P}^{k\ell} x \right) \right) \right) \oslash \bigodot_{i=1}^{k-1} x_{R,\mathrm{opt}}^{i\ell} \right),$

the greatest $u^k$ such that $G^k u^k \preceq z^k$ and $u^k \preceq \Phi^{k\ell}(u^k)$ for all $\ell \in \{1,\ldots,L\}$ is given by the greatest fixed point of $\Phi^k : \Sigma \to \Sigma$,

$\Phi^k(x) = x \wedge \left( G^k \backslash z^k \right) \wedge \bigwedge_{\ell=1}^{L} \Phi^{k\ell}(x).$

4.4 Supplementary remarks

Proposition 21 (Adapted from [10]).

Let $\tilde{\Sigma} = \left\{ s \in \Sigma \;\middle|\; (\forall t \in \mathbb{Z})\; s(t) \notin \{\varepsilon, \top\} \right\}$. For any $a \in \tilde{\Sigma}$, the mapping $\Pi_a : \Sigma \to \Sigma$, $x \mapsto a \odot x$, is dually residuated. For any $b \in \Sigma$, $\Pi_a^\flat(b)$, denoted $b \circledast a$, is the least $x \in \Sigma$ such that $a \odot x \succeq b$.

Proof.

For an arbitrary $a \in \tilde{\Sigma}$, we have $(\forall t \in \mathbb{Z})\; a(t) \neq \varepsilon$, and therefore $\Pi_a(\top) = a \odot \top = \top$. Moreover, since $\odot$ distributes over $\wedge$ (cf. Def. 18), for any $A \subseteq \Sigma$ it holds that $\Pi_a\left( \bigwedge_{x \in A} x \right) = a \odot \left( \bigwedge_{x \in A} x \right) = \bigwedge_{x \in A} (a \odot x) = \bigwedge_{x \in A} \Pi_a(x)$. The result then follows from Theorem 11.  □

Remark 22 ([10]).

Given two counters $x_1, x_2 \in \Sigma$, the series $s \in \mathbb{Z}_{\min}[[\delta]]$ defined by $(\forall t \in \mathbb{Z})\; s(t) = x_1(t) - x_2(t)$ is not necessarily a counter; $x_1 \oslash x_2$ is the greatest counter less than or equal to $s$ (in the order of $\mathbb{Z}_{\min}[[\delta]]$). Similarly, provided $x_2 \in \tilde{\Sigma}$ (cf. Proposition 21), $x_1 \circledast x_2$ is the least counter greater than or equal to $s$.  ♢
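Remark 22 can be illustrated numerically: the pointwise difference of two counters need not be monotone, and one recovers a counter by taking a monotone envelope of the difference. In the sketch below (conventional algebra, our own encoding), the running maximum gives the least nondecreasing majorant and a right-to-left running minimum gives the greatest nondecreasing minorant; which envelope corresponds to $\oslash$ and which to its dual depends on the order conventions of $\Sigma$, so the code only demonstrates the projection step itself.

```python
def monotone_envelopes(x1, x2):
    # Pointwise difference of two counters; in general not nondecreasing.
    s = [a - b for a, b in zip(x1, x2)]
    # Least nondecreasing majorant of s: running maximum from the left.
    upper, cur = [], float("-inf")
    for v in s:
        cur = max(cur, v)
        upper.append(cur)
    # Greatest nondecreasing minorant of s: running minimum from the right.
    lower, cur = [0] * len(s), float("inf")
    for t in range(len(s) - 1, -1, -1):
        cur = min(cur, s[t])
        lower[t] = cur
    return s, upper, lower

s, up, lo = monotone_envelopes([0, 2, 2, 3], [0, 0, 2, 2])
print(s, up, lo)  # [0, 2, 0, 1] [0, 2, 2, 2] [0, 0, 0, 1]
```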

Note that, in Proposition 21, the restriction of $a$ to the subset $\tilde{\Sigma}$ is necessary for $\Pi_a$ to meet the conditions of Theorem 11. In fact, if $a(\tau) = \varepsilon$ for some $\tau \in \mathbb{Z}$, then $(a \odot \top)(\tau) = \varepsilon \odot \top = \varepsilon$, so $a \odot \top \neq \top$. If $a(\rho) = \top$ for some $\rho \in \mathbb{Z}$, one can show that $(\exists A \subseteq \Sigma)\; \Pi_a\left( \bigwedge_{x \in A} x \right) \neq \bigwedge_{x \in A} \Pi_a(x)$. Taking, for instance, $A = \tilde{\Sigma}$: as $\bigwedge_{x \in \tilde{\Sigma}} x = s_\varepsilon$, we have $\Pi_a\left( \bigwedge_{x \in \tilde{\Sigma}} x \right) = a \odot s_\varepsilon = s_\varepsilon$; on the other hand, for any $x \in \tilde{\Sigma}$ we have $x(\rho) \neq \varepsilon$ and hence $(a \odot x)(\rho) = \top \odot x(\rho) = \top$, showing that $\left( \bigwedge_{x \in \tilde{\Sigma}} \Pi_a(x) \right)(\rho) = \left( \bigwedge_{x \in \tilde{\Sigma}} (a \odot x) \right)(\rho) = \top$.

Remark 23.

Since we take a term like $\eta \delta^{\tau}$ to mean that a transition has accumulated $\eta$ firings by time $\tau$, it is reasonable to assume that the counters $u$, $x_i$, and $y$ (cf. Section 2.3) are elements of $\tilde{\Sigma}$. Note, additionally, that for any finite subset $B \subseteq \tilde{\Sigma}$ one has $\bigodot_{s \in B} s \in \tilde{\Sigma}$ and $\bigwedge_{s \in B} s \in \tilde{\Sigma}$.  ♢

5 Optimal control of TEGs with resource sharing and output-reference update

In this section, as the main contribution of this paper, we incorporate the ideas discussed in Section 3 into the class of systems studied in Section 4 by showing how to determine the optimal (just-in-time) control inputs in the face of changes in the output-references for TEGs that share resources under a given priority policy. We again emphasize that, in this setting, the overall system is not a TEG.

This section is structured similarly to Section 4, starting with the simple case of a single shared resource (Sections 5.1 and 5.2) and then generalizing to the case of multiple resources (Section 5.3).

5.1 Problem formulation: the case of a single shared resource

Consider the system from Fig. 5 and assume every subsystem $S^k$ is operating optimally with respect to its own output-reference $z^k$, according to the priority-based strategy introduced in Section 4.1. Now, suppose that at time $T$ each $S^k$ has its reference $z^k$ updated to $\hat{z}^k$ (with the possibility that $\hat{z}^k = z^k$ for some of them). Analogously to Section 3, we seek, for each $k$, the input $\hat{u}_{\mathrm{opt}}^k$ which leads the corresponding output to optimally track $\hat{z}^k$ while preserving the input $u_{\mathrm{opt}}^k$ up to time $T$. The crucial difference is that now the priority scheme must be observed and, furthermore, past resource allocations by subsystems with lower priority must also be respected. Such allocations are relevant, despite having occurred before time $T$, because the respective resource releases may take place after $T$, thus influencing the availability of the resource in the meantime.

For the purpose of the discussion to follow, let us fix an arbitrary $k \in \{1,\ldots,K\}$. When updating the input of $S^k$, we require minimal interference from lower-priority subsystems (i.e., all $S^j$ with $j \in \{k+1,\ldots,K\}$). This means that we have to respect past resource allocations in these subsystems, but may ignore future ones. Recall that $x_{A,\mathrm{opt}}^j(t)$ is the accumulated number of firings originally scheduled for $x_A^j$ up to time $t$. Respecting the past means that the firings which have already occurred by time $T$ (when the new references are received) cannot be revoked. On the other hand, the prospective firings that have not taken place by time $T$ can still be postponed and hence, from the point of view of $S^k$, ignored. In other words, for the sake of determining $\hat{u}_{\mathrm{opt}}^k = \hat{x}_{A,\mathrm{opt}}^k$ with minimal interference from $S^j$, we preserve the terms $x_A^j(t) = x_{A,\mathrm{opt}}^j(t)$ for $t \le T$ and neglect all new firings by making $x_A^j(t) = x_{A,\mathrm{opt}}^j(T)$ for $t > T$. Recalling Remark 14, this is precisely captured by the counter $r_T(x_{A,\mathrm{opt}}^j)$.
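Described in conventional terms, this freezing operation admits a very direct encoding; a minimal sketch, again with counters as finite lists (the encoding is ours):

```python
def r_T(x, T):
    # Keep all firings accumulated up to time T; neglect later ones by
    # holding the counter at its value x(T) from then on.
    return [x[t] if t <= T else x[T] for t in range(len(x))]

x_opt = [0, 1, 2, 3, 4]
print(r_T(x_opt, 2))  # [0, 1, 2, 2, 2]
```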

In sum, (i) we must compute $\hat{x}_{A,\mathrm{opt}}^k$ in decreasing order of priority, i.e., start from $k=1$ and proceed up to $k=K$; (ii) when calculating $\hat{x}_{A,\mathrm{opt}}^k$ for $k>1$, we must consider $\hat{x}_{A,\mathrm{opt}}^i$ for every $i \in \{1,\ldots,k-1\}$; (iii) when calculating $\hat{x}_{A,\mathrm{opt}}^k$ for $k<K$, we must consider $r_T(x_{A,\mathrm{opt}}^j)$ for every $j \in \{k+1,\ldots,K\}$.

It will be convenient to define the following terms:

$H_A^k = \bigodot_{i=1}^{k-1} \hat{x}_{A,\mathrm{opt}}^i, \qquad H_R^k = \bigodot_{i=1}^{k-1} \left( G^i \hat{x}_{A,\mathrm{opt}}^i \right), \qquad L_A^k = \bigodot_{j=k+1}^{K} r_T(x_{A,\mathrm{opt}}^j), \qquad L_R^k = \bigodot_{j=k+1}^{K} \left( G^j r_T(x_{A,\mathrm{opt}}^j) \right).$

$H_A^k$ combines the counters $\hat{x}_{A,\mathrm{opt}}^i$ of all subsystems $S^i$ with priority higher than that of $S^k$, referring to the already-updated optimal schedules of the resource-allocation transitions $x_A^i$ with respect to the corresponding updated references $\hat{z}^i$; accordingly, $H_R^k$ combines the counters $\hat{x}_{R,\mathrm{opt}}^i = G^i \hat{x}_{A,\mathrm{opt}}^i$ representing the respective resource-release events. In a similar way, $L_A^k$ combines the counters $r_T(x_{A,\mathrm{opt}}^j)$ of all subsystems $S^j$ with priority lower than that of $S^k$, representing the past firings (up to time $T$) of the resource-allocation transitions $x_A^j$ and neglecting their firings after time $T$, whereas $L_R^k$ gathers the respective resource-release events by combining the counters $G^j r_T(x_{A,\mathrm{opt}}^j)$; it should be emphasized that, even though we only consider the resource allocations by $S^j$ up to time $T$, the respective resource-release events may take place after $T$, so in general one may have $G^j r_T(x_{A,\mathrm{opt}}^j) \neq r_T(x_{R,\mathrm{opt}}^j)$.
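Since the Hadamard product of counters is, in conventional terms, a pointwise sum, the allocation terms above can be sketched as follows (0-based subsystem indices and a finite-horizon list encoding, both our own, for illustration):

```python
def freeze(x, T):
    # Freeze a counter after time T: keep the past, hold the value x(T).
    return [x[t] if t <= T else x[T] for t in range(len(x))]

def hadamard(counters, horizon):
    # Hadamard product of counters = pointwise sum in conventional algebra;
    # an empty combination degenerates to the all-zero counter.
    return [sum(c[t] for c in counters) for t in range(horizon)]

def combined_allocation_terms(schedules, k, T):
    # H_A^k: updated allocations of higher-priority subsystems (indices < k);
    # L_A^k: allocations of lower-priority subsystems (indices > k), frozen at T.
    horizon = len(schedules[0])
    ha = hadamard(schedules[:k], horizon)
    la = hadamard([freeze(x, T) for x in schedules[k + 1:]], horizon)
    return ha, la

schedules = [[0, 1, 1], [0, 0, 1], [0, 2, 3]]
print(combined_allocation_terms(schedules, 1, 1))  # ([0, 1, 1], [0, 2, 2])
```

The release terms $H_R^k$ and $L_R^k$ would be built the same way after passing each schedule through the corresponding transfer $G^i$ or $G^j$.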

Thus, based on (9) and on the foregoing discussion, in order to update $u^k = x_A^k$ without compromising the performance of higher-priority subsystems and, at the same time, ensuring minimal interference from lower-priority subsystems while taking into account their past resource allocations, we must respect

(⋆) $\beta \left( H_R^k \odot \left( G^k x_A^k \right) \odot L_R^k \right) \preceq H_A^k \odot x_A^k \odot L_A^k,$

where it is understood that for $k=1$ (resp. $k=K$), the degenerate terms $H_A^1$ and $H_R^1$ (resp. $L_A^K$ and $L_R^K$) are to be neglected.

The problem of determining the new optimal input $\hat{u}_{\mathrm{opt}}^k$ $(= \hat{x}_{A,\mathrm{opt}}^k)$ with respect to a reference $\hat{z}^k$ given at time $T$ can be formulated as follows: find the greatest element of the set

(20) $F^k = \left\{ x_A^k \in \Sigma \;\middle|\; G^k x_A^k \preceq \hat{z}^k \text{ and } (\star) \text{ and } r_T(x_A^k) = r_T(x_{A,\mathrm{opt}}^k) \right\}.$

Remark 24.

It should be clear that, for any $k \in \{1,\ldots,K\}$, if $\hat{z}^i = z^i$ for all $i \in \{1,\ldots,k\}$, then $\hat{x}_{A,\mathrm{opt}}^i = x_{A,\mathrm{opt}}^i$ for all $i \in \{1,\ldots,k\}$. Nonetheless, if $\hat{z}^i \neq z^i$ for some $i < k$, in general it may be that $\hat{x}_{A,\mathrm{opt}}^k \neq x_{A,\mathrm{opt}}^k$ even if $\hat{z}^k = z^k$ (see Example 28).   ♢

5.2 Optimal update of the inputs: the case of a single shared resource

We set out to look for the greatest element of the set $F^k$ (defined as in (20)) by proposing a slight generalization of Theorem 13.

Proposition 25.

Let $\mathcal{D}$ and $\mathcal{C}$ be complete idempotent semirings, $f_1, f_2 : \mathcal{D} \to \mathcal{C}$ residuated mappings, $\psi : \mathcal{D} \to \mathcal{C}$ an isotone mapping, and $c \in \mathcal{C}$. Consider the set

$S_\psi \stackrel{\text{def}}{=} \left\{ x \in \mathcal{D} \;\middle|\; f_1(x) \preceq \psi(x) \text{ and } f_2(x) = c \right\}$

and the isotone mapping $\Omega : \mathcal{D} \to \mathcal{D}$,

$\Omega(x) = x \wedge f_1^\sharp(\psi(x)) \wedge f_2^\sharp(c).$

If $S_\psi \neq \emptyset$, we have $\bigoplus_{x \in S_\psi} x = \bigoplus \left\{ x \in \mathcal{D} \;\middle|\; \Omega(x) = x \right\}.$

Proof.

Define the set

$\tilde{S}_\psi = \left\{ x \in \mathcal{D} \;\middle|\; f_1(x) \preceq \psi(x) \text{ and } f_2(x) \preceq c \right\}$

and denote $\chi = \bigoplus_{x \in S_\psi} x$ and $\tilde{\chi} = \bigoplus_{x \in \tilde{S}_\psi} x$. Note that

$f_1(x) \preceq \psi(x) \text{ and } f_2(x) \preceq c \;\Leftrightarrow\; x \preceq f_1^\sharp(\psi(x)) \text{ and } x \preceq f_2^\sharp(c) \quad \text{(see Def. 8)} \;\Leftrightarrow\; x \preceq f_1^\sharp(\psi(x)) \wedge f_2^\sharp(c) \;\Leftrightarrow\; x = x \wedge f_1^\sharp(\psi(x)) \wedge f_2^\sharp(c) = \Omega(x).$

So, we can rewrite $\tilde{S}_\psi$ as $\tilde{S}_\psi = \{ x \in \mathcal{D} \mid x = \Omega(x) \}$, clearly implying $\tilde{\chi} = \bigoplus \{ x \in \mathcal{D} \mid \Omega(x) = x \}$. Then, it also follows from Remark 4 that $\tilde{\chi} \in \tilde{S}_\psi$.

Now, assume $S_\psi \neq \emptyset$. As $S_\psi \subseteq \tilde{S}_\psi$, this implies $(\exists \tilde{x} \in \tilde{S}_\psi)\; f_2(\tilde{x}) = c$. Taking such an $\tilde{x}$, we have $\tilde{x} \preceq \tilde{\chi}$ and so $c = f_2(\tilde{x}) \preceq f_2(\tilde{\chi})$ (as $f_2$ is isotone). But we saw above that $\tilde{\chi} \in \tilde{S}_\psi$, meaning $f_2(\tilde{\chi}) \preceq c$, so $f_2(\tilde{\chi}) = c$. Therefore, $\tilde{\chi} \in S_\psi$ and hence $\tilde{\chi} \preceq \chi$. On the other hand, $S_\psi \subseteq \tilde{S}_\psi$ implies $\chi \preceq \tilde{\chi}$, showing that $\tilde{\chi} = \chi$.  □

Now, let us once more fix an arbitrary $k \in \{1,\ldots,K\}$, and assume $\hat{x}_{A,\mathrm{opt}}^i$ has been determined for each (if any) $i \in \{1,\ldots,k-1\}$. Seeing that $(\star)$ is equivalent to

$G^k x_A^k \preceq \left( \beta \backslash \left( H_A^k \odot x_A^k \odot L_A^k \right) \right) \oslash \left( H_R^k \odot L_R^k \right),$

by defining the mapping $\Psi^k : \Sigma \to \Sigma$,

$\Psi^k(x) = \hat{z}^k \wedge \left( \left( \beta \backslash \left( H_A^k \odot x \odot L_A^k \right) \right) \oslash \left( H_R^k \odot L_R^k \right) \right),$

we can write

$F^k = \left\{ x \in \Sigma \;\middle|\; G^k x \preceq \Psi^k(x) \text{ and } r_T(x) = r_T(x_{A,\mathrm{opt}}^k) \right\}.$

This reveals a correspondence between the set $F^k$ and the set $S_\psi$ from Proposition 25: take $\mathcal{D}$ and $\mathcal{C}$ both as $\Sigma$, $f_1$ as $L_{G^k}$, $\psi$ as $\Psi^k$, $f_2$ as $r_T$, and $c$ as $r_T(x_{A,\mathrm{opt}}^k)$. So, as long as $F^k \neq \emptyset$, the conditions from the proposition are met and, recalling that $r_T \circ r_T = r_T$, the optimal update of $x_A^k$ is the greatest fixed point of the (isotone) mapping $\Gamma^k : \Sigma \to \Sigma$,

(21) $\Gamma^k(x) = x \wedge \left( G^k \backslash \Psi^k(x) \right) \wedge r_T^\sharp\!\left( r_T(x_{A,\mathrm{opt}}^k) \right).$

Next, we must investigate when Fk is nonempty. To that end, considering the set

$\tilde{F}^k = \left\{ x_A^k \in \Sigma \;\middle|\; (\star) \text{ and } r_T(x_A^k) = r_T(x_{A,\mathrm{opt}}^k) \right\},$

we want to show that

(22) $\underline{x}_A^k \stackrel{\text{def}}{=} \bigwedge_{x \in \tilde{F}^k} x \;\in\; \tilde{F}^k,$

i.e., that there exists a (unique) least counter, which we will denote by $\underline{x}_A^k$, satisfying both $(\star)$ and $r_T(x_A^k) = r_T(x_{A,\mathrm{opt}}^k)$. Define the mapping $\Upsilon^k : \Sigma \to \Sigma$,

$\Upsilon^k(x) = \left[ \left( \beta \left( H_R^k \odot \left( G^k x \right) \odot L_R^k \right) \right) \circledast \left( H_A^k \odot L_A^k \right) \right] \oplus r_T(x_{A,\mathrm{opt}}^k) \oplus x.$

Note that, from Proposition 21 and Remark 23, the mapping $\Pi_{(H_A^k \odot L_A^k)}$ is dually residuated, so $\Upsilon^k$ is well defined. Since $x \succeq r_T(x)$ for any $x \in \Sigma$, for any element $\tilde{x}_A^k$ of $\tilde{F}^k$ it follows that $\tilde{x}_A^k \succeq r_T(\tilde{x}_A^k) = r_T(x_{A,\mathrm{opt}}^k)$. As, in addition, $(\star)$ is equivalent to

$\left( \beta \left( H_R^k \odot \left( G^k x_A^k \right) \odot L_R^k \right) \right) \circledast \left( H_A^k \odot L_A^k \right) \preceq x_A^k,$

one can see that $\Upsilon^k(\tilde{x}_A^k) = \tilde{x}_A^k$, i.e., every element of $\tilde{F}^k$ is a fixed point of $\Upsilon^k$; in short, $\tilde{F}^k \subseteq \{ x \in \Sigma \mid \Upsilon^k(x) = x \}$. Hence, denoting

$\underline{\Upsilon}^k \stackrel{\text{def}}{=} \bigwedge \left\{ x \in \Sigma \;\middle|\; \Upsilon^k(x) = x \right\}$

(which, according to Remark 4, is the least fixed point of $\Upsilon^k$), we have

(23) $\underline{x}_A^k \succeq \underline{\Upsilon}^k.$

To prove the converse inequality, we proceed to show that Υk is an element of F˜k.

Proposition 26.

$\underline{\Upsilon}^k \stackrel{\text{def}}{=} \bigwedge \left\{ x \in \Sigma \;\middle|\; \Upsilon^k(x) = x \right\} \;\in\; \tilde{F}^k.$

Proof.

Any $x_A^k \in \Sigma$ such that $\Upsilon^k(x_A^k) = x_A^k$ satisfies

$\left( \beta \left( H_R^k \odot \left( G^k x_A^k \right) \odot L_R^k \right) \right) \circledast \left( H_A^k \odot L_A^k \right) \preceq x_A^k$

and, by consequence (cf. Def. 8), also satisfies $(\star)$. According to Remark 4, $\underline{\Upsilon}^k$ is a fixed point of $\Upsilon^k$, therefore $(\star)$ holds for $x_A^k = \underline{\Upsilon}^k$ and it suffices to prove that $r_T(\underline{\Upsilon}^k) = r_T(x_{A,\mathrm{opt}}^k)$.

$\underline{\Upsilon}^k$ being a fixed point of $\Upsilon^k$ implies $\underline{\Upsilon}^k \succeq r_T(x_{A,\mathrm{opt}}^k)$, so $r_T(\underline{\Upsilon}^k) \succeq r_T(r_T(x_{A,\mathrm{opt}}^k)) = r_T(x_{A,\mathrm{opt}}^k)$.

Moreover, $r_T(x_{A,\mathrm{opt}}^k)$ is a fixed point of $\Upsilon^k$, as can be seen from the following argument. Since we assume $\hat{x}_{A,\mathrm{opt}}^i$ to be given for each $i \in \{1,\ldots,k-1\}$, according to $(\star)$ we know that $x_{A,\mathrm{opt}}^k$ fulfills (24). But note that $r_T(r_T(x_{A,\mathrm{opt}}^k)) = r_T(x_{A,\mathrm{opt}}^k)$, so (24) is equivalent to

$\beta \left( H_R^k \odot \left( G^k r_T(x_{A,\mathrm{opt}}^k) \right) \odot L_R^k \right) \preceq H_A^k \odot r_T(x_{A,\mathrm{opt}}^k) \odot L_A^k,$

which, in turn, implies

$\left( \beta \left( H_R^k \odot \left( G^k r_T(x_{A,\mathrm{opt}}^k) \right) \odot L_R^k \right) \right) \circledast \left( H_A^k \odot L_A^k \right) \preceq r_T(x_{A,\mathrm{opt}}^k).$

This, together with the fact that $r_T(x_{A,\mathrm{opt}}^k) \oplus r_T(x_{A,\mathrm{opt}}^k) = r_T(x_{A,\mathrm{opt}}^k)$, implies $\Upsilon^k(r_T(x_{A,\mathrm{opt}}^k)) = r_T(x_{A,\mathrm{opt}}^k)$. Hence, $\underline{\Upsilon}^k \preceq r_T(x_{A,\mathrm{opt}}^k)$ and, as $r_T$ is isotone and $r_T \circ r_T = r_T$, we have $r_T(\underline{\Upsilon}^k) \preceq r_T(r_T(x_{A,\mathrm{opt}}^k)) = r_T(x_{A,\mathrm{opt}}^k)$, which concludes the proof.  □

A direct consequence of Proposition 26 is that $\underline{x}_A^k \preceq \underline{\Upsilon}^k$, which, combined with (23), implies

(25) $\underline{x}_A^k = \underline{\Upsilon}^k,$

thus proving that (22) holds. Isotony of $L_{G^k}$ then implies

(26) $F^k \neq \emptyset \;\Leftrightarrow\; G^k \underline{x}_A^k \preceq \hat{z}^k.$

In case $G^k \underline{x}_A^k \not\preceq \hat{z}^k$ (and hence, according to (26), $F^k = \emptyset$), this means the past inputs of $S^k$ itself, combined with the (updated) operation of higher-priority subsystems and with the past inputs of lower-priority ones, make it impossible for $S^k$ to respect $\hat{z}^k$. As $(\star)$ and $r_T(x_A^k) = r_T(x_{A,\mathrm{opt}}^k)$ are irrevocable, we will then seek the least way to relax $\hat{z}^k$ (i.e., look for the least counter $\bar{z}^k \succeq \hat{z}^k$) such that the set

$F_{\bar{z}^k}^k = \left\{ x_A^k \in \Sigma \;\middle|\; G^k x_A^k \preceq \bar{z}^k \text{ and } (\star) \text{ and } r_T(x_A^k) = r_T(x_{A,\mathrm{opt}}^k) \right\}$

is nonempty. The solution is given by the following result.

Proposition 27.

The least counter $\bar{z}^k \succeq \hat{z}^k$ such that $F_{\bar{z}^k}^k \neq \emptyset$ is $\bar{z}^k = \hat{z}^k \oplus \left( G^k \underline{x}_A^k \right)$.

Proof.

Taking $\bar{z}^k = \hat{z}^k \oplus \left( G^k \underline{x}_A^k \right)$, it can be readily checked that $\underline{x}_A^k \in F_{\bar{z}^k}^k$, therefore $F_{\bar{z}^k}^k \neq \emptyset$; the proof then proceeds by direct analogy with that of Proposition 16.  □

Following the same reasoning as before, we define the mapping $\Psi_{\bar{z}^k}^k : \Sigma \to \Sigma$,

$\Psi_{\bar{z}^k}^k(x) = \bar{z}^k \wedge \left( \left( \beta \backslash \left( H_A^k \odot x \odot L_A^k \right) \right) \oslash \left( H_R^k \odot L_R^k \right) \right),$

with $\bar{z}^k = \hat{z}^k \oplus \left( G^k \underline{x}_A^k \right)$. Since we know from Proposition 27 that $F_{\bar{z}^k}^k \neq \emptyset$, we can again apply Proposition 25 (now taking $\psi$ as $\Psi_{\bar{z}^k}^k$ instead of $\Psi^k$) to finally conclude that $\hat{x}_{A,\mathrm{opt}}^k$ is the greatest fixed point of the (isotone) mapping $\Gamma_{\bar{z}^k}^k : \Sigma \to \Sigma$,

$\Gamma_{\bar{z}^k}^k(x) = x \wedge \left( G^k \backslash \Psi_{\bar{z}^k}^k(x) \right) \wedge r_T^\sharp\!\left( r_T(x_{A,\mathrm{opt}}^k) \right).$

Example 28.

Consider the system from Example 20 (Fig. 8), with $S^1$, $S^2$, and $S^3$ operating under the obtained optimal schedules with respect to references $z^1$, $z^2$, and $z^3$, respectively. Now, suppose new references $\hat{z}^1 = e\delta^{36} \oplus 1\delta^{46} \oplus 3\delta^{54} \oplus 5\delta^{55} \oplus 6\delta^{+\infty}$, $\hat{z}^2 = z^2$, and $\hat{z}^3 = z^3$ are received at time $T = 27$. Observing the priority policy, we start by updating the input of $S^1$. Recall that, for $k=1$, the terms $H_A^1$ and $H_R^1$ are not well defined and hence are disregarded in $(\star)$. For the relevant terms, we have $L_A^1 = r_T(x_{A,\mathrm{opt}}^2) \odot r_T(x_{A,\mathrm{opt}}^3) = e\delta^{24} \oplus 1\delta^{+\infty}$ and $L_R^1 = \left( G^2 r_T(x_{A,\mathrm{opt}}^2) \right) \odot \left( G^3 r_T(x_{A,\mathrm{opt}}^3) \right) = e\delta^{26} \oplus 1\delta^{+\infty}$. Note that $L_A^1$ refers to all allocations of the resource by $S^2$ and $S^3$ up to time $T = 27$ (in this case, just one allocation, by $S^3$, at $t = 25$) and $L_R^1$ represents the corresponding resource releases. These two terms combined inform that one instance of the resource is occupied from $t = 25$ until $t = 27$; inequality $(\star)$ accordingly imposes a hard condition on $x_A^1$. The second hard condition is to preserve the past inputs of $S^1$ itself, meaning $r_T(x_A^1) = r_T(x_{A,\mathrm{opt}}^1)$. For this example, $r_T(x_{A,\mathrm{opt}}^1) = e\delta^{27} \oplus \varepsilon\delta^{+\infty}$, so the restriction is simply that there can be no firing of $x_A^1$ (and hence none of $u^1$) before or at time $T = 27$. Defining $F^1$ as in (20), through (26) one can check that $F^1 \neq \emptyset$; then, we can directly look for the greatest fixed point of $\Gamma^1$ (defined as in (21)), which is $\hat{x}_{A,\mathrm{opt}}^1 = e\delta^{33} \oplus 1\delta^{42} \oplus 2\delta^{43} \oplus 3\delta^{47} \oplus 4\delta^{51} \oplus 5\delta^{52} \oplus 6\delta^{+\infty}$ $(= \hat{u}_{\mathrm{opt}}^1)$. Then, $\hat{x}_{R,\mathrm{opt}}^1 = e\delta^{36} \oplus 1\delta^{45} \oplus 2\delta^{46} \oplus 3\delta^{50} \oplus 4\delta^{54} \oplus 5\delta^{55} \oplus 6\delta^{+\infty}$ $(= \hat{y}_{\mathrm{opt}}^1)$.

We now proceed to update $x_A^2$. We have $H_A^2 = \hat{x}_{A,\mathrm{opt}}^1$, $H_R^2 = \hat{x}_{R,\mathrm{opt}}^1$, $L_A^2 = r_T(x_{A,\mathrm{opt}}^3) = e\delta^{24} \oplus 1\delta^{+\infty}$, and $L_R^2 = G^3 r_T(x_{A,\mathrm{opt}}^3) = e\delta^{26} \oplus 1\delta^{+\infty}$. Moreover, $r_T(x_{A,\mathrm{opt}}^2) = e\delta^{27} \oplus \varepsilon\delta^{+\infty}$. We then verify that $F^2 = \emptyset$, so we look for the least $\bar{z}^2 \succeq \hat{z}^2$ such that $F_{\bar{z}^2}^2 \neq \emptyset$. According to Proposition 27, we obtain $\bar{z}^2 = e\delta^{39} \oplus 1\delta^{50} \oplus 2\delta^{61} \oplus 3\delta^{+\infty}$. Computing the greatest fixed point of $\Gamma_{\bar{z}^2}^2$ then yields $\hat{x}_{A,\mathrm{opt}}^2 = e\delta^{29} \oplus 1\delta^{36} \oplus 2\delta^{56} \oplus 3\delta^{+\infty}$ $(= \hat{u}_{\mathrm{opt}}^2)$ and $\hat{x}_{R,\mathrm{opt}}^2 = e\delta^{34} \oplus 1\delta^{41} \oplus 2\delta^{61} \oplus 3\delta^{+\infty}$ $(= \hat{y}_{\mathrm{opt}}^2)$. Notice that $\hat{x}_{A,\mathrm{opt}}^2 \neq x_{A,\mathrm{opt}}^2$ even though $\hat{z}^2 = z^2$ (cf. Remark 24).

Finally, for $S^3$ we get $H_A^3 = \hat{x}_{A,\mathrm{opt}}^1 \odot \hat{x}_{A,\mathrm{opt}}^2 = e\delta^{29} \oplus 1\delta^{33} \oplus 2\delta^{36} \oplus 3\delta^{42} \oplus 4\delta^{43} \oplus 5\delta^{47} \oplus 6\delta^{51} \oplus 7\delta^{52} \oplus 8\delta^{56} \oplus 9\delta^{+\infty}$ and $H_R^3 = \hat{x}_{R,\mathrm{opt}}^1 \odot \hat{x}_{R,\mathrm{opt}}^2 = e\delta^{34} \oplus 1\delta^{36} \oplus 2\delta^{41} \oplus 3\delta^{45} \oplus 4\delta^{46} \oplus 5\delta^{50} \oplus 6\delta^{54} \oplus 7\delta^{55} \oplus 8\delta^{61} \oplus 9\delta^{+\infty}$. Recall that, for $k = K = 3$, the terms $L_A^3$ and $L_R^3$ are not well defined and hence are disregarded in $(\star)$. With $r_T(x_{A,\mathrm{opt}}^3) = e\delta^{24} \oplus 1\delta^{27} \oplus \varepsilon\delta^{+\infty}$, we have $F^3 \neq \emptyset$, so we compute the greatest fixed point of $\Gamma^3$ and obtain $\hat{x}_{A,\mathrm{opt}}^3 = e\delta^{24} \oplus 1\delta^{29} \oplus 2\delta^{38} \oplus 3\delta^{+\infty}$ $(= \hat{u}_{\mathrm{opt}}^3)$. Then, $\hat{x}_{R,\mathrm{opt}}^3 = e\delta^{26} \oplus 1\delta^{31} \oplus 2\delta^{40} \oplus 3\delta^{+\infty}$ $(= \hat{y}_{\mathrm{opt}}^3)$.

The updated optimal schedules are shown in Fig. 10.   ♢

5.3 Extension to the case of multiple shared resources

Consider the system from Fig. 9, with every subsystem $S^k$ following the optimal schedule with respect to output-reference $z^k$, obtained according to Section 4.3. Suppose that each reference $z^k$ is updated to $\hat{z}^k$ at time $T$ (with perhaps $\hat{z}^k = z^k$ for some of them). In this section we seek, for each $k$, the optimal input $\hat{u}_{\mathrm{opt}}^k$ which preserves $u_{\mathrm{opt}}^k$ up to time $T$ and results in the output $\hat{y}_{\mathrm{opt}}^k$ that tracks $\hat{z}^k$ as closely as possible, without interfering with the operation of higher-priority subsystems and while respecting the past allocations of every resource by lower-priority subsystems.

Figure 10: Updated optimal schedules obtained in Example 28; the gray, black, and crosshatched bars represent the operation of $S^1$, $S^2$, and $S^3$, respectively, whereas the dashed bars are the delays imposed by the resource.

As usual, we base the following discussion on a fixed but arbitrary $k \in \{1,\ldots,K\}$. Let us denote by $\hat{x}_{A,\mathrm{opt}}^{k\ell}$ the counter representing the updated optimal firing schedule for the resource-allocation transition $x_A^{k\ell}$. Arguing as in Section 5.1, the task at hand can be summarized as follows: (i) we must compute $\hat{u}_{\mathrm{opt}}^k$ in decreasing order of priority; (ii) when calculating $\hat{u}_{\mathrm{opt}}^k$ for $k > 1$, we must consider $\hat{x}_{A,\mathrm{opt}}^{i\ell}$ for every $i \in \{1,\ldots,k-1\}$ and for all $\ell \in \{1,\ldots,L\}$; (iii) when calculating $\hat{u}_{\mathrm{opt}}^k$ for $k < K$, we must consider $r_T(x_{A,\mathrm{opt}}^{j\ell})$ for every $j \in \{k+1,\ldots,K\}$ and for all $\ell \in \{1,\ldots,L\}$.

Still along the lines of Section 5.1, define the terms

$H_A^{k\ell} = \bigodot_{i=1}^{k-1} \hat{x}_{A,\mathrm{opt}}^{i\ell}, \qquad H_R^{k\ell} = \bigodot_{i=1}^{k-1} \hat{x}_{R,\mathrm{opt}}^{i\ell}, \qquad L_A^{k\ell} = \bigodot_{j=k+1}^{K} r_T(x_{A,\mathrm{opt}}^{j\ell}), \qquad L_R^{k\ell} = \bigodot_{j=k+1}^{K} \left( H^{j\ell} r_T(x_{A,\mathrm{opt}}^{j\ell}) \right),$

which can be explained as in the referred section, only now for each resource $\ell$. We aim at updating $u^k$ without compromising the performance of higher-priority subsystems and, at the same time, ensuring minimal interference from lower-priority subsystems while taking into account their past allocations of all resources. Based on (16), we must consequently respect, for every $\ell \in \{1,\ldots,L\}$,

(⋆⋆) $\beta_\ell \left( H_R^{k\ell} \odot x_R^{k\ell} \odot L_R^{k\ell} \right) \preceq H_A^{k\ell} \odot x_A^{k\ell} \odot L_A^{k\ell},$

where it is understood that for $k=1$ (resp. $k=K$), the degenerate terms $H_A^{1\ell}$ and $H_R^{1\ell}$ (resp. $L_A^{K\ell}$ and $L_R^{K\ell}$) are to be neglected.

We can then formulate the problem of optimally updating the input $u_{\mathrm{opt}}^k$ with respect to a reference $\hat{z}^k$ given at time $T$ as follows: find the greatest element of the set

$M^k = \left\{ u^k \in \Sigma \;\middle|\; G^k u^k \preceq \hat{z}^k \text{ and } r_T(u^k) = r_T(u_{\mathrm{opt}}^k) \text{ and } (\star\star) \text{ holds for all } \ell \in \{1,\ldots,L\} \right\}.$

Recall from Section 4.3 that we can write $x_A^{k\ell} = \tilde{P}^{k\ell} u^k$ and $x_R^{k\ell} = H^{k\ell} \tilde{P}^{k\ell} u^k$, with $\tilde{P}^{k\ell}$ defined as in (19). Applying this to $(\star\star)$ gives

(28) $\beta_\ell \left( H_R^{k\ell} \odot \left( H^{k\ell} \tilde{P}^{k\ell} u^k \right) \odot L_R^{k\ell} \right) \preceq H_A^{k\ell} \odot \left( \tilde{P}^{k\ell} u^k \right) \odot L_A^{k\ell},$

which is equivalent to

$u^k \preceq \left( H^{k\ell} \tilde{P}^{k\ell} \right) \backslash \left( \left( \beta_\ell \backslash \left( H_A^{k\ell} \odot \left( \tilde{P}^{k\ell} u^k \right) \odot L_A^{k\ell} \right) \right) \oslash \left( H_R^{k\ell} \odot L_R^{k\ell} \right) \right).$

Define the mappings $\Psi^{k\ell} : \Sigma \to \Sigma$,

$\Psi^{k\ell}(x) = \left( H^{k\ell} \tilde{P}^{k\ell} \right) \backslash \left( \left( \beta_\ell \backslash \left( H_A^{k\ell} \odot \left( \tilde{P}^{k\ell} x \right) \odot L_A^{k\ell} \right) \right) \oslash \left( H_R^{k\ell} \odot L_R^{k\ell} \right) \right),$

$\ell \in \{1,\ldots,L\}$, and $\Psi^k : \Sigma \to \Sigma$,

$\Psi^k(x) = \left( G^k \backslash \hat{z}^k \right) \wedge \bigwedge_{\ell=1}^{L} \Psi^{k\ell}(x).$

We can then rewrite $M^k$ as

$M^k = \left\{ x \in \Sigma \;\middle|\; x \preceq \Psi^k(x) \text{ and } r_T(x) = r_T(u_{\mathrm{opt}}^k) \right\}.$

Note that $x \preceq \Psi^k(x)$ is equivalent to $\mathrm{Id}_\Sigma(x) \preceq \Psi^k(x)$, where $\mathrm{Id}_\Sigma$ is the identity mapping on $\Sigma$. It is trivial to verify that $\mathrm{Id}_\Sigma$ is residuated and that $\mathrm{Id}_\Sigma^\sharp = \mathrm{Id}_\Sigma$. Therefore, there exists a correspondence between $M^k$ and $S_\psi$ from Proposition 25: take $\mathcal{D}$ and $\mathcal{C}$ both as $\Sigma$, $f_1$ as $\mathrm{Id}_\Sigma$, $\psi$ as $\Psi^k$, $f_2$ as $r_T$, and $c$ as $r_T(u_{\mathrm{opt}}^k)$. Provided $M^k \neq \emptyset$, the proposition entails that $\hat{u}_{\mathrm{opt}}^k$ can be determined by computing the greatest fixed point of the (isotone) mapping $\Lambda^k : \Sigma \to \Sigma$,

$\Lambda^k(x) = x \wedge \Psi^k(x) \wedge r_T^\sharp\!\left( r_T(u_{\mathrm{opt}}^k) \right).$

In order to check whether Mk is nonempty, consider the set

$\tilde{M}^k = \left\{ u^k \in \Sigma \;\middle|\; (\star\star) \text{ holds for all } \ell \in \{1,\ldots,L\} \text{ and } r_T(u^k) = r_T(u_{\mathrm{opt}}^k) \right\}.$

We want to show that

$\underline{u}^k \stackrel{\text{def}}{=} \bigwedge_{x \in \tilde{M}^k} x \;\in\; \tilde{M}^k,$

i.e., that there exists a (unique) least counter $\underline{u}^k$ satisfying $(\star\star)$ for all $\ell \in \{1,\ldots,L\}$ and $r_T(u^k) = r_T(u_{\mathrm{opt}}^k)$. Define, for each $\ell \in \{1,\ldots,L\}$, the mapping $\Upsilon^{k\ell} : \Sigma \to \Sigma$,

and also the mapping $\Upsilon^k : \Sigma \to \Sigma$,

$\Upsilon^k(x) = x \oplus r_T(u_{\mathrm{opt}}^k) \oplus \bigoplus_{\ell=1}^{L} \Upsilon^{k\ell}(x).$

Since (⋆⋆) is equivalent to (28) which, in turn, is equivalent to

one can see that, for any element $\tilde{u}^k$ of $\tilde{M}^k$, we have $\Upsilon^{k\ell}(\tilde{u}^k) \preceq \tilde{u}^k$ for all $\ell$. As $\tilde{u}^k \succeq r_T(\tilde{u}^k) = r_T(u_{\mathrm{opt}}^k)$, it actually holds that $\Upsilon^k(\tilde{u}^k) = \tilde{u}^k$, implying $\tilde{M}^k \subseteq \{ x \in \Sigma \mid \Upsilon^k(x) = x \}$. Hence, denoting

$\underline{\Upsilon}^k \stackrel{\text{def}}{=} \bigwedge \left\{ x \in \Sigma \;\middle|\; \Upsilon^k(x) = x \right\},$

we have

$\underline{u}^k \succeq \underline{\Upsilon}^k.$

By arguments parallel to those put forth in Section 5.2, it can be shown that the converse inequality also holds, so we have

$\underline{u}^k = \underline{\Upsilon}^k.$

Analogously to (26), this leads to the conclusion that

$M^k \neq \emptyset \;\Leftrightarrow\; G^k \underline{u}^k \preceq \hat{z}^k.$

In case $G^k \underline{u}^k \not\preceq \hat{z}^k$, we look for the least counter $\bar{z}^k \succeq \hat{z}^k$ such that the set

$M_{\bar{z}^k}^k = \left\{ u^k \in \Sigma \;\middle|\; G^k u^k \preceq \bar{z}^k \text{ and } r_T(u^k) = r_T(u_{\mathrm{opt}}^k) \text{ and } (\star\star) \text{ holds for all } \ell \in \{1,\ldots,L\} \right\}$

is nonempty. A straightforward adaptation of Proposition 27 gives the solution $\bar{z}^k = \hat{z}^k \oplus \left( G^k \underline{u}^k \right)$.

Following the same reasoning as before, we define the mapping $\Psi_{\bar{z}^k}^k : \Sigma \to \Sigma$,

$\Psi_{\bar{z}^k}^k(x) = \left( G^k \backslash \bar{z}^k \right) \wedge \bigwedge_{\ell=1}^{L} \Psi^{k\ell}(x),$

with $\bar{z}^k = \hat{z}^k \oplus \left( G^k \underline{u}^k \right)$. We can then once more apply Proposition 25, only now taking $\psi$ as $\Psi_{\bar{z}^k}^k$ instead of $\Psi^k$, which leads to the conclusion that $\hat{u}_{\mathrm{opt}}^k$ is the greatest fixed point of the (isotone) mapping $\Lambda_{\bar{z}^k}^k : \Sigma \to \Sigma$,

$\Lambda_{\bar{z}^k}^k(x) = x \wedge \Psi_{\bar{z}^k}^k(x) \wedge r_T^\sharp\!\left( r_T(u_{\mathrm{opt}}^k) \right).$

6 Conclusion

This paper solves the problem of ensuring that a number of TEGs competing for the use of shared resources operate optimally (in a just-in-time sense) even in the face of changes in their output-references. The proposed method assumes a prespecified priority policy on the component TEGs, and the optimal inputs are computed under the rule that the operation of lower-priority subsystems cannot interfere with the performance of higher-priority ones. However, when higher-priority subsystems recompute their inputs after a change in the reference signal occurs, they must, of course, respect past resource allocations by lower-priority subsystems. We also study the case in which the limited availability of the resources renders it impossible to respect the updated output-reference for one or more of the subsystems. In this case, we show how to relax such references in an optimal way, so that the ultimately obtained inputs lead to tracking them as closely as possible. The results are illustrated through simple examples. Exploiting the generality of the method and applying it to a larger, more realistic case study is a subject for future work.


This contribution is dedicated to Prof. Dr.-Ing. Dr. h.c. Michael Zeitz on the occasion of his 80th birthday.


Funding source: Deutsche Forschungsgemeinschaft

Award Identifier / Grant number: RA 516/14-1

Funding statement: Financial support from Deutsche Forschungsgemeinschaft (DFG) via grant RA 516/14-1 is gratefully acknowledged.

References

1. B. Addad, S. Amari and J.-J. Lesage. Networked conflicting timed event graphs representation in (max,+) algebra. Discrete Event Dynamic Systems, 22(4):429–449, 2012. doi:10.1007/s10626-012-0136-0.

2. X. Allamigeon, V. Bœuf and S. Gaubert. Performance evaluation of an emergency call center: tropical polynomial systems applied to timed Petri nets. In Formal Modeling and Analysis of Timed Systems (FORMATS 2015), Springer, 2015. doi:10.1007/978-3-319-22975-1_2.

3. F. Baccelli, G. Cohen, G. J. Olsder and J.-P. Quadrat. Synchronization and Linearity: an Algebra for Discrete Event Systems. Wiley, 1992.

4. T. Blyth and M. Janowitz. Residuation Theory. Pergamon Press, 1972.

5. W. M. Boussahel, S. Amari and R. Kara. Analytic evaluation of the cycle time on networked conflicting timed event graphs in the (max,+) algebra. Discrete Event Dynamic Systems, 26(4):561–581, 2016. doi:10.1007/s10626-015-0220-3.

6. G. Cohen, S. Gaubert and J. P. Quadrat. Asymptotic throughput of continuous timed Petri nets. In 34th IEEE Conference on Decision and Control (CDC), New Orleans, LA, USA, 1995.

7. G. Cohen, S. Gaubert and J. P. Quadrat. Algebraic system analysis of timed Petri nets. In Idempotency, J. Gunawardena, Ed., Collection of the Isaac Newton Institute, pages 145–170, 1998. doi:10.1017/CBO9780511662508.010.

8. A. Corréïa, A. Abbas-Turki, R. Bouyekhf and A. El Moudni. A dioid model for invariant resource sharing problems. IEEE Transactions on Systems, Man, and Cybernetics, 39(4):770–781, 2009. doi:10.1109/TSMCA.2009.2019867.

9. B. Cottenceau, L. Hardouin and J. Trunk. A C++ toolbox to handle series for event-variant/time-variant (max,+) systems. In 15th International Workshop on Discrete Event Systems (WODES'20), to appear.

10. L. Hardouin, B. Cottenceau, S. Lagrange and E. Le Corronc. Performance analysis of linear systems over semiring with additive inputs. In 9th International Workshop on Discrete Event Systems (WODES), Göteborg, Sweden, 2008. doi:10.1109/WODES.2008.4605920.

11. L. Hardouin, B. Cottenceau, Y. Shang and J. Raisch. Control and state estimation for max-plus linear systems. Foundations and Trends in Systems and Control, 6(1):1–116, 2018. doi:10.1561/9781680835458.

12. E. Menguy, J.-L. Boimond, L. Hardouin and J.-L. Ferrier. Just-in-time control of timed event graphs: update of reference input, presence of uncontrollable input. IEEE Transactions on Automatic Control, 45(11):2155–2159, 2000. doi:10.1109/9.887652.

13. S. Moradi, L. Hardouin and J. Raisch. Optimal control of a class of timed discrete event systems with shared resources, an approach based on the Hadamard product of series in dioids. In 56th IEEE Conference on Decision and Control (CDC), Melbourne, Australia, 2017. doi:10.1109/CDC.2017.8264373.

14. G. Schafaschek, S. Moradi, L. Hardouin and J. Raisch. Optimal control of timed event graphs with resource sharing and output-reference update. In 15th International Workshop on Discrete Event Systems (WODES'20), to appear.

15. T. J. J. van den Boom and B. De Schutter. Modelling and control of discrete event systems using switching max-plus-linear systems. Control Engineering Practice, 14(10):1199–1211, 2006. doi:10.1016/j.conengprac.2006.02.006.

Received: 2020-04-08
Accepted: 2020-06-04
Published Online: 2020-07-03
Published in Print: 2020-07-26

© 2020 Schafaschek et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 Public License.
