Optimal control of timed event graphs with resource sharing and output-reference update

Abstract Timed event graphs (TEGs) are a subclass of timed Petri nets that model synchronization and delay phenomena, but not conflict or choice. We consider a scenario where a number of TEGs share one or several resources and are subject to changes in their output-reference signals. Because of resource sharing, the resulting overall discrete event system is not a TEG. We propose a formal method to determine the optimal control input for such systems, where optimality is in the sense of the widely adopted just-in-time criterion. Our approach is based on a prespecified priority policy for the TEG components of the overall system. It builds on existing control theory for TEGs, which exploits the fact that, in a suitable mathematical framework (idempotent semirings such as the max-plus or the min-plus algebra), the temporal evolution of TEGs can be described by a set of linear time-invariant equations.


Introduction
In this paper, we consider a scenario where several discrete event subsystems, each modeled by a timed event graph (TEG), share one or more resources and where the reference signals for the subsystems may change unexpectedly. TEGs are a subclass of timed Petri nets. They are characterized by the fact that each place has precisely one upstream and one downstream transition and all arcs have weight one. TEGs can model synchronization and delay phenomena but not conflict, choice, or the sharing of resources. Therefore, the overall system class investigated in this contribution is more general than the subclass of TEGs. We argue that it models a wide range of application scenarios, e. g., in manufacturing or transportation. We aim at determining the optimal control input, where optimality is understood in the sense of the widely adopted just-in-time criterion. In particular, the aim is to fire all input transitions as late as possible while guaranteeing that the firing of output transitions is not later than specified by the respective reference signal. For example, in a manufacturing context, the firing of an input transition could correspond to the provisioning of raw material, while the firing of an output transition could model the completion of a workpiece. In general, a just-in-time policy aims at satisfying customer demands while minimizing internal stocks. We solve this optimal control problem for a fixed prioritization of subsystems, but allow updates of the reference signals.
It is a well-known fact that in a suitable mathematical framework, namely an idempotent semiring (or dioid) setting such as the max-plus or the min-plus algebra, the evolution of TEGs can be described by linear equations (see [3] for a thorough coverage). Based on such linear dioid models, an elaborate control theory has become available. Given an a-priori known reference signal, it mostly focuses on optimality in the sense of the just-in-time criterion and considers feedforward and (output or state) feedback control. For a tutorial introduction to this control framework, the reader may refer to [11].
In some applications, it may be necessary to update the reference for the system's output during run-time, for instance when customer demand increases and a new production objective must be considered. In [12], a strategy has been presented to optimally update the input in the face of such changes in the output-reference. In case the new reference encodes unachievable requirements, the authors show how to relax it such that optimal control for the relaxed reference leads to firing times of the output transitions that are as close as possible to (but possibly later than) the originally desired ones.
Systems of practical interest often involve limited resources that are shared among different subsystems. As examples, one can think of an automated manufacturing cell where the same tool/robot may be required in several steps of the production process, or of computational tasks competing for the use of a fixed number of processors. TEGs do not allow for concurrency or choice and hence cannot model such resource-sharing phenomena. Overcoming this limitation has motivated several efforts in the literature, with a predominant focus on modeling and analysis. In [7,6], a modeling strategy for continuous timed Petri nets is proposed where conflict places are handled by the use of priority rules. In [8], constraints due to resource sharing are translated into additional inequalities in the system model. [1] models conflicting TEGs by max-plus time-varying equations; the models are restricted to safe conflict places. [5] relaxes the safety hypothesis on the conflict places and studies cycle time evaluation on conflicting TEGs with multiple shared resources. Works focusing on control of TEGs with resource sharing are less abundant. In [15], the authors show that concurrency can be incorporated into switching max-plus linear systems models and apply model predictive control techniques to obtain the optimal switching sequence. In [13], the modeling and control of a number of TEGs that share resources is addressed. Obviously, because of resource sharing, the overall system is no longer a TEG. Under a prespecified priority policy, the authors show how to compute the optimal (just-in-time) input for each subsystem with respect to its individual output-reference.
In this paper, we propose a formal method to obtain the optimal control inputs in the face of changes in the output-references for TEGs that share resources under a given priority policy, thus merging the results from [12] with those of [13]. This paper represents an extended version of a recent conference paper [14]. It extends the contributions of [14] in the following ways: the results are presented more didactically, with more detailed explanations of some crucial steps. Most significantly, we generalize the method to the case of an arbitrary number of shared resources (Section 5.3 is entirely new), whereas in [14] only the case of a single shared resource is explicitly covered; as a prerequisite thereto, we also formalize the extension of the results from [13], originally presented only for at most two shared resources, to the case of multiple shared resources (Section 4.3 originates here). The examples in this paper, although still simple, are more general and elucidative than those from [14]. We also enhance the presentation of preliminary concepts, providing a brief survey of the theoretical background and making the paper largely self-contained.
Prospective applications for the proposed approach include emergency call centers (as studied, e. g., in [2], which, in turn, is based on [7,6]) where the arrival of high-priority calls may render it necessary to reschedule the answers to lower-priority ones, or manufacturing scenarios where changes in the demand of high-priority products will require a re-adjustment of the resource allocations of processing steps related to lower-priority products.
We consider a set of TEGs operating under optimal schedules with respect to their individual output-references and to the priority policy; supposing the output-reference of one or more of the subsystems is updated during run-time, we show how to optimally update all their inputs so that their outputs are as close as possible to the corresponding new references and the priority policy is still observed. In case the performance limitations of the subsystems, combined with the limited availability of the resources, make it impossible to respect some of the new references, we also provide the optimal way to relax such references so that the ultimately obtained inputs lead to tracking them as closely as possible.
The examples presented throughout this paper serve solely the purpose of illustrating and helping to clarify the results. Due to space limitations and to the fact that existing computational tools have not kept pace with the state-of-the-art theoretical results proposed here, we do not present a more comprehensive example. The proposed method can, however, be applied to larger, more general systems of practical relevance.
The paper is organized as follows. Section 2 summarizes well-known facts on idempotent semirings. In Section 3, we adapt existing results on the control of TEGs with output-reference update to the idempotent semiring used in this paper. Section 4 provides an overview of previous results on modeling and control of TEGs with shared resources. The major purpose of these three sections is to make the paper as self-contained as possible. In Section 5, the main contributions of the paper are presented; namely, we formulate and solve the problem of determining the optimal control inputs for TEGs with shared resources in the face of changes in the output-references. Section 6 presents the conclusions and final remarks.

Preliminaries
In this section, we present a summary of some basic definitions and results on idempotent semirings and timed event graphs; for an exhaustive discussion, the reader may refer to [3]. We also touch on some topics from residuation theory and control of TEGs (see [4] and [11], respectively).

Idempotent semirings
An idempotent semiring (or dioid) D is a set endowed with two binary operations, denoted ⊕ (sum) and ⊗ (product), such that: ⊕ is associative, commutative, idempotent (i. e., (∀a ∈ D) a ⊕ a = a), and has a neutral (zero) element, denoted ε; ⊗ is associative, distributes over ⊕, and has a neutral (unit) element, denoted e; the element ε is absorbing for ⊗ (i. e., (∀a ∈ D) a ⊗ ε = ε). As in conventional algebra, the product symbol ⊗ is often omitted. An order relation can be defined over D by

a ⪯ b ⇔ a ⊕ b = b. (1)

Note that ε is the bottom element of D, as (∀a ∈ D) ε ⪯ a. An idempotent semiring D is complete if it is closed for infinite sums and if the product distributes over infinite sums. For a complete idempotent semiring, the top element is defined as ⊤ = ⨁ x∈D x, and the greatest lower bound (or infimum) operation, denoted ∧, by a ∧ b = ⨁ {x ∈ D | x ⪯ a and x ⪯ b}. The operation ∧ is associative, commutative, and idempotent.

Example 1. The set ℤ ∪ {−∞, +∞}, with the minimum operation as ⊕ and conventional addition as ⊗, forms a complete idempotent semiring called min-plus algebra, denoted ℤ min , in which ε = +∞, e = 0, and ⊤ = −∞. Note that in ℤ min we have 2 ⊕ 5 = 2, so 5 ⪯ 2; the order is reversed with respect to the conventional order over ℤ. ⋄
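These definitions can be made concrete with a few lines of code. The following minimal sketch (ours, in Python; the names are not from the paper) realizes ℤ min with min as ⊕ and conventional addition as ⊗:

```python
INF = float('inf')
EPS = INF   # zero element of Z_min: neutral for oplus, absorbing for otimes
E = 0       # unit element of Z_min

def oplus(a, b):
    """Sum in Z_min: the conventional minimum."""
    return min(a, b)

def otimes(a, b):
    """Product in Z_min: conventional addition, with eps absorbing."""
    return EPS if EPS in (a, b) else a + b

def leq(a, b):
    """Order (1): a <= b  iff  a oplus b = b."""
    return oplus(a, b) == b

# 2 oplus 5 = 2, hence 5 <= 2: the order is reversed w.r.t. the usual one.
```

Note how `leq(5, 2)` holds while `leq(2, 5)` does not, matching the reversed order discussed above.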

Remark 2 ([3]). The set of n×n-matrices with entries in an idempotent semiring D, endowed with sum and product operations defined by (A ⊕ B) ij = A ij ⊕ B ij and (A ⊗ B) ij = ⨁ k∈{1,...,n} A ik ⊗ B kj for all i, j ∈ {1, . . . , n}, forms a complete idempotent semiring, denoted D n×n . Its unit element (or identity matrix) is the n×n-matrix with entries equal to e on the diagonal and ε elsewhere; the zero (resp. top) element is the n×n-matrix with all entries equal to ε (resp. ⊤). The definition of order (1) implies, for any A, B ∈ D n×n , that A ⪯ B ⇔ (∀i, j) A ij ⪯ B ij . It is possible to deal with nonsquare matrices in this context (including, in particular, row and column vectors) by suitably padding them with ε-rows or columns; this is done only implicitly, as it does not interfere with the relevant parts of the results of operations between matrices. ⋄

A mapping f between idempotent semirings is called isotone if a ⪯ b implies f (a) ⪯ f (b).

Remark 3. The composition of two isotone mappings is isotone. ⋄

Remark 4. Let Π be an isotone mapping over a complete idempotent semiring D, and let Y = {x ∈ D | Π(x) = x} be the set of fixed points of Π. Then ⋀ y∈Y y (resp. ⨁ y∈Y y) is the least (resp. greatest) fixed point of Π. ⋄

Algorithms exist to compute the least and greatest fixed points of certain isotone mappings over complete idempotent semirings. In particular, the algorithm presented in [11] is applicable to the relevant mappings considered in this paper.
In a complete idempotent semiring D, the Kleene star operator on a ∈ D is defined as a * = ⨁ i≥0 a i , with a 0 = e and a i = a i−1 ⊗ a for i > 0.

Remark 5. The implicit equation x = ax ⊕ b over a complete idempotent semiring D admits x = a * b as least solution. This applies, in particular, to the case in which x, b ∈ D n and a ∈ D n×n (see [3] and Remark 2). ⋄
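For intuition, the Kleene star and the least solution of Remark 5 can be computed directly in the matrix min-plus algebra. The sketch below is our own Python rendering; it assumes nonnegative entries so that the iteration stabilizes, as in a shortest-path problem:

```python
INF = float('inf')  # eps of Z_min

def mat_otimes(A, B):
    """Min-plus matrix product: (A otimes B)[i][j] = min_k (A[i][k] + B[k][j])."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_oplus(A, B):
    """Entrywise min-plus sum (minimum)."""
    return [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def star(A):
    """Kleene star A* = I oplus A oplus A^2 oplus ...; the partial sums stop
    changing in finitely many steps when there are no negative cycles."""
    n = len(A)
    S = [[0 if i == j else INF for j in range(n)] for i in range(n)]  # identity
    while True:
        S_next = mat_oplus(S, mat_otimes(S, A))
        if S_next == S:
            return S
        S = S_next

# Least solution of x = A x oplus b (Remark 5): x = A* b.
A = [[INF, 2], [3, INF]]
b = [[0], [INF]]
x = mat_otimes(star(A), b)
```

One can check that the resulting x indeed satisfies x = Ax ⊕ b, and that no smaller (in the ℤ min order, larger conventionally) solution exists.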

Semirings of formal power series
Let s = {s(t)} t∈ℤ be a sequence over ℤ min . The δ-transform of s is a formal power series in δ with coefficients in ℤ min and exponents in ℤ, defined by s = ⨁ t∈ℤ s(t)δ t . We denote both the sequence and its δ-transform by the same symbol, as no ambiguity will occur. Since δs = ⨁ t∈ℤ s(t − 1)δ t , multiplication by δ can be seen as a backward shift operation.
Definition 6. The set of formal power series in δ with coefficients in ℤ min and exponents in ℤ, with addition and multiplication defined by (s ⊕ s′)(t) = s(t) ⊕ s′(t) and (s ⊗ s′)(t) = ⨁ τ∈ℤ s(τ) ⊗ s′(t − τ), is a complete idempotent semiring, denoted ℤ min [[δ]]. Note that the order in ℤ min [[δ]] is induced by the order in ℤ min , i. e., s ⪯ s′ ⇔ (∀t ∈ ℤ) s(t) ⪯ s′(t). ⋄

In this paper we will use sequences to represent the number of firings of transitions in TEGs, so that each term s(t) refers to the accumulated number of firings of a certain transition up to time t. Naturally, this interpretation carries over to the terms of a series s corresponding to the δ-transform of such a sequence. A series thus obtained is clearly nonincreasing (in the order of ℤ min , which, as pointed out before, is the reverse of the standard order of ℤ), meaning s(t − 1) ⪰ s(t) for all t. We will henceforth refer to such series as counters, and denote the set of counters by Σ.
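To fix ideas, a counter can be represented in code by its monomials. Under the convention used below (our own encoding, not the paper's), a term ηδ^τ contributes the value η for all t ≤ τ, and ⊕ takes the pointwise minimum:

```python
INF = float('inf')

def counter(terms):
    """terms: list of monomials (eta, tau), each meaning s(t) = eta for t <= tau;
    the sum oplus is the pointwise min over all contributing monomials."""
    def s(t):
        return min((eta for eta, tau in terms if t <= tau), default=INF)
    return s

# The counter e*d^42 + 1*d^46 + 3*d^54 + 6*d^{+inf} used later in Example 12:
z = counter([(0, 42), (1, 46), (3, 54), (6, INF)])
```

One checks that z(42) = 0, z(43) = 1, z(47) = 3, and z(t) = 6 for t ≥ 55, and that z is nondecreasing in the conventional sense, i. e., a valid counter.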

TEG models in idempotent semirings
A timed Petri net is a tuple (P, T, A, w, h, v), where P is a finite set of places (graphically represented by circles), T a finite set of transitions (represented by bars), A ⊆ (P × T) ∪ (T × P) a set of arcs connecting places to transitions and transitions to places, w a weight function assigning a positive integer weight to every arc, and h a function assigning a nonnegative holding time to each place. In the following, holding times will be restricted to be integers. Furthermore, the function v assigns to each place a nonnegative integer number of tokens residing initially in this place. For any p ∈ P and t ∈ T, if (p, t) ∈ A, we say that p is an upstream place of t, and t is a downstream transition of p; analogously, if (t, p) ∈ A, t is said to be an upstream transition of p, and p is a downstream place of t. The dynamics of a timed Petri net is governed by the following rules: (i) a transition t can fire if all its upstream places p contain at least w(p, t) tokens that have resided there for at least h(p) time units; (ii) if a transition t fires, it removes w(p, t) tokens from each of its upstream places p and deposits w(t, p′) tokens in each of its downstream places p′.
Timed event graphs (TEGs) are timed Petri nets in which each place has exactly one upstream and one downstream transition and all arcs have weight 1. In a TEG, we can distinguish input transitions (those that are not affected by the firing of other transitions), output transitions (those that do not affect the firing of other transitions), and internal transitions (those that are neither input nor output transitions). In this paper, we will limit our discussion to SISO TEGs, i. e., TEGs with only one input and one output transition, which we denote respectively by u and y; internal transitions are denoted by x i . An example of a SISO TEG is shown in Fig. 1.
A TEG is said to be operating under the earliest firing rule if every internal and output transition fires as soon as it is enabled.
With each transition x i , we associate a sequence {x i (t)} t∈ℤ , for simplicity denoted by the same symbol, where x i (t) represents the accumulated number of firings of x i up to and including time t. Similarly, we associate sequences {u(t)} t∈ℤ and {y(t)} t∈ℤ with transitions u and y, respectively. Considering the TEG from Fig. 1 operating under the earliest firing rule, in conventional algebra we have x 1 (t) = min{u(t), x 2 (t − 2) + 2}, i. e., the number of firings of transition x 1 up to time t is the minimum between the number of firings of transition u up to time t and the number of firings of transition x 2 up to time t − 2 (because the place connecting x 2 to x 1 has holding time 2) plus 2 (as the place connecting x 2 to x 1 has initially 2 tokens).
In ℤ min , the number of firings of transition x 1 can be conveniently rewritten as x 1 (t) = u(t) ⊕ 2x 2 (t − 2), which, through the δ-transform, can be expressed in Σ as x 1 = u ⊕ 2δ 2 x 2 . We can obtain similar relations for x 2 and y and, defining the vector x = (x 1 x 2 ) T , collect them in matrix form. In general, a TEG can be described by implicit equations over Σ of the form

x = Ax ⊕ Bu, y = Cx. (2)

From Remark 5, the least solution of (2) is given by

y = CA * Bu = Gu, (3)

where G = CA * B is often called the transfer function of the system. For instance, for the system from Fig. 1 we obtain the (scalar) transfer function G = eδ 3 (2δ 5 ) * .
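The transfer function G = eδ 3 (2δ 5 ) * can be checked numerically. The sketch below (ours, in Python) simulates the counter recursion of Fig. 1 under the earliest firing rule and compares it with the unrolled form y(t) = min_i (2i + u(t − 3 − 5i)); the step input u is a hypothetical choice:

```python
def u(t):
    """Hypothetical input: five firings of the input transition at time 0."""
    return 0 if t < 0 else 5

def simulate_y(horizon):
    """Earliest-firing recursion of Fig. 1:
    x1(t) = min(u(t), 2 + x2(t-2)),  x2(t) = x1(t-3),  y = x2."""
    x1, x2 = {}, {}
    for t in range(-10, horizon):          # start before any firing occurs
        x1[t] = min(u(t), 2 + x2.get(t - 2, 0))
        x2[t] = x1.get(t - 3, 0)
    return [x2[t] for t in range(horizon)]

def closed_form_y(t):
    """y(t) = min_i (2i + u(t - 3 - 5i)), i.e. G = e d^3 (2 d^5)^*."""
    return min(2 * i + u(t - 3 - 5 * i) for i in range(10))
```

With five input firings at time 0 and the double-capacity resource, the first two jobs complete at t = 3, two more at t = 8, and the last at t = 13, consistently in both computations.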

Residuation theory
Residuation theory provides, under certain conditions, greatest (resp. least) solutions to inequalities of the form f (x) ⪯ b (resp. f (x) ⪰ b).

Definition 8. An isotone mapping f : D → C, with D and C complete idempotent semirings, is said to be residuated if for all y ∈ C there exists a greatest solution to the inequality f (x) ⪯ y; this greatest solution is denoted f ♯ (y). Dually, f is said to be dually residuated if for all y ∈ C the inequality f (x) ⪰ y admits a least solution, denoted f ♭ (y). In this case, f ♯ (y) and f ♭ (y) yield the greatest and least solutions of the respective inequalities.

Theorem 9 ([4]). Mapping f as in Def. 8 is residuated if and only if there exists a unique isotone mapping f ♯ : C → D such that f ∘ f ♯ ⪯ Id C and f ♯ ∘ f ⪰ Id D , where Id C and Id D are the identity mappings on C and D, respectively. ⋄

Remark 10. For any a in a complete idempotent semiring D, the mapping L a : x ↦ a ⊗ x is residuated; the greatest solution of a ⊗ x ⪯ y is denoted L a ♯ (y) = a ⃝\ y (left-division by a). This extends to the matrix case (see [3] and Remark 2). ⋄

Optimal control of TEGs
Assume that a TEG to be controlled is modeled by equations (2) and that an output-reference z ∈ Σ is given. Under the just-in-time paradigm, we aim at firing the input transition u the least possible number of times while guaranteeing that the output transition y fires, by each time instant, at least as many times as specified by z. In other words, we seek the greatest u (in the order of Σ) such that y = G ⊗ u ⪯ z. Based on (3) and Remark 10, the solution is directly obtained by

u opt = G ⃝\ z. (4)

Example 12. For the TEG from Fig. 1, suppose it is required that transition y fires once at time t = 43, twice at t = 47, and three times at t = 55, meaning the accumulated number of firings of y should be e (= 0) for t ≤ 42, 1 for 43 ≤ t ≤ 46, 3 for 47 ≤ t ≤ 54, and 6 for t ≥ 55. This is represented by the output-reference z = eδ 42 ⊕ 1δ 46 ⊕ 3δ 54 ⊕ 6δ +∞ . Applying (4), we get u opt = eδ 38 ⊕ 1δ 41 ⊕ 2δ 43 ⊕ 3δ 46 ⊕ 4δ 51 ⊕ 6δ +∞ , and the corresponding optimal output is y opt = G ⊗ u opt = eδ 41 ⊕ 1δ 44 ⊕ 2δ 46 ⊕ 3δ 49 ⊕ 4δ 54 ⊕ 6δ +∞ . One can verify that y opt ⪯ z. These computations can be performed with the aid of the C++ toolbox introduced in [9]. We interpret the place with holding time 3 between x 1 and x 2 , initially empty, as the operation of the system, and the bottom place with holding time 2 between x 2 and x 1 , with two initial tokens, as a double-capacity resource. Under this interpretation, the firings of transitions x 1 and x 2 represent resource-allocation and resource-release events, respectively. This paves the way for the examples of Sections 4 and 5, where the resource will be shared with other (sub)systems. The optimal schedule obtained above can be displayed in a chart as shown in Fig. 2, where each row corresponds to one instance of the resource. ⋄
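The numbers in Example 12 can be reproduced with a few lines of Python. This is our own finite-window sketch: conventionally, (4) amounts to u_opt(t) = max_i (z(t + 3 + 5i) − 2i), the smallest counter whose output y(t) = min_i (2i + u_opt(t − 3 − 5i)) dominates the reference at every instant:

```python
def z(t):
    """Output-reference of Example 12."""
    if t <= 42: return 0
    if t <= 46: return 1
    if t <= 54: return 3
    return 6

def u_opt(t):
    """Just-in-time input (4), rendered pointwise for G = e d^3 (2 d^5)^*."""
    return max(z(t + 3 + 5 * i) - 2 * i for i in range(12))

def y_opt(t):
    """Resulting optimal output y = G (x) u_opt."""
    return min(2 * i + u_opt(t - 3 - 5 * i) for i in range(12))
```

Evaluating u_opt reproduces eδ38 ⊕ 1δ41 ⊕ 2δ43 ⊕ 3δ46 ⊕ 4δ51 ⊕ 6δ+∞, and y_opt reproduces the output series of Example 12, with y_opt(t) ≥ z(t) at every t (i. e., y opt ⪯ z in Σ).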

Optimal control of TEGs with output-reference update
The material of this section is a dual version, adapted to the point of view of counters, of the results from [12]. In practice, it may be necessary to update the reference for the output of a system during run-time, for instance when customer demand increases and a new production objective must be taken into account. For a system like the one from Example 12, let reference z be updated to a new one, z′, at time T. The problem at hand is to find the input u′ opt which optimally tracks z′ without, however, changing the inputs given up to time T. Define the mapping r T : Σ → Σ by

(r T (s))(t) = s(t) for t ≤ T and (r T (s))(t) = ε for t > T, (5)

i. e., r T retains the values of a counter up to and including time T and discards all later information. Our objective can then be restated as follows: find the greatest element u′ opt of the set

F = {u ∈ Σ | G ⊗ u ⪯ z′ and r T (u) = r T (u opt )},

where u opt is the optimal input with respect to reference z, computed as in (4). The following theorem provides, given that certain conditions are met, a way to compute this greatest element.
An obvious correspondence between F and S can be established by taking both D and C as Σ, f 1 as L G (which is well known to be residuated; see Remark 10), c 1 as z′, f 2 as r T , and c 2 as r T (u opt ).

Remark 14. Mapping r T as defined in (5) is isotone and satisfies r T ∘ r T = r T . ⋄

In order to check for nonemptiness of F , let us consider the set U T = {u ∈ Σ | r T (u) = r T (u opt )}, i. e., the set of counters that up to and including time T are identical to u opt . Consider now ū, the least element of U T . Since r T ∘ r T = r T , we have r T (ū) = r T (u opt ), and therefore

F ̸= ∅ if and only if G ⊗ ū ⪯ z′. (7)

Example 15. For the system from Example 12 (Fig. 1) operating according to the optimal input obtained for output-reference z, suppose that at time T = 40 a new demand is received: three firings of y are now required at t = 54 (instead of at t = 55). This translates to z′ = eδ 42 ⊕ 1δ 46 ⊕ 3δ 53 ⊕ 6δ +∞ . In order to determine whether F ̸= ∅, following (7) one verifies that G ⊗ ū ⪯ z′, so the updated reference is achievable. From Theorem 13 (and recalling that r T ∘ r T = r T ) we obtain u′ opt = eδ 38 ⊕ 1δ 40 ⊕ 2δ 43 ⊕ 3δ 45 ⊕ 4δ 50 ⊕ 6δ +∞ , and hence y′ opt = eδ 41 ⊕ 1δ 43 ⊕ 2δ 46 ⊕ 3δ 48 ⊕ 4δ 53 ⊕ 6δ +∞ . The updated optimal schedule is shown in Fig. 3, to be interpreted as explained in Example 12. ⋄

In case G ⊗ ū ̸⪯ z′ (and hence F = ∅), the past inputs make it impossible for the system to respect z′. Intuitively, having implemented a just-in-time policy u opt for a reference z up to time T may make it impossible to satisfy a more demanding new reference z′. Since the condition r T (u) = r T (u opt ) cannot be relaxed, in order to have a solution we must then increase z′; more precisely, we wish to find the least counter z″ ⪰ z′ such that the set F z″ , defined as F with z″ in place of z′, is not empty. The following result provides the answer.
Take now an arbitrary z″ ⪰ z′ such that F z″ ̸= ∅, and take any v ∈ F z″ . Clearly ū ⪯ v, and hence G ⊗ ū ⪯ G ⊗ v ⪯ z″ ; since also z′ ⪯ z″ , it follows that z′ ⊕ (G ⊗ ū) ⪯ z″ . A correspondence between F z″ and S can be established analogously to that between F and S, only taking c 1 as z″ (instead of z′). Applying Theorem 13 and recalling that r T ∘ r T = r T then yields the optimal input with respect to the relaxed reference. Note that in case F ̸= ∅ we have z″ = z′ ⊕ (G ⊗ ū) = z′ and therefore recover solution (6).
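Continuing the Python sketch from Example 12 (again our own finite-window rendering, not the paper's toolbox), the update of Example 15 can be verified numerically: the input is frozen up to T = 40 and just-in-time residuation is re-applied for the new reference beyond T:

```python
T = 40

def z(t):
    """Original reference (Example 12)."""
    if t <= 42: return 0
    if t <= 46: return 1
    if t <= 54: return 3
    return 6

def z_new(t):
    """Updated reference of Example 15: last three firings due at t = 54."""
    if t <= 42: return 0
    if t <= 46: return 1
    if t <= 53: return 3
    return 6

def jit(ref, t):
    """Pointwise just-in-time input for G = e d^3 (2 d^5)^*."""
    return max(ref(t + 3 + 5 * i) - 2 * i for i in range(12))

def u_upd(t):
    """Keep the old optimal input up to T, then track z_new (feasible here,
    since the nonemptiness test succeeds for this instance)."""
    return jit(z, t) if t <= T else max(jit(z_new, t), jit(z, T))

def y_upd(t):
    return min(2 * i + u_upd(t - 3 - 5 * i) for i in range(12))
```

The resulting y_upd reproduces y′ opt = eδ41 ⊕ 1δ43 ⊕ 2δ46 ⊕ 3δ48 ⊕ 4δ53 ⊕ 6δ+∞ from Example 15, while u_upd coincides with u opt for all t ≤ T.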

Modeling and optimal control of TEGs with resource sharing
We now turn our attention to systems in which a number of TEGs share one or multiple resources. We first focus on the simple case of a single shared resource (Sections 4.1 and 4.2); the discussion is based on [13], where the authors also present the more general case of two shared resources. Here, we take it one step further and explicitly generalize the approach to the case of arbitrarily many shared resources (Section 4.3).

Modeling of TEGs with one shared resource
Consider a system consisting of TEGs S 1 , . . . , S K sharing a resource (with arbitrary capacity), as illustrated in Fig. 5. H k represents the internal dynamics of S k . β may, in general, be a TEG (or, in simple cases, just a single place) describing the capacity of the resource as well as the minimal delay between release and allocation events. Clearly, the overall system is no longer a TEG, as there are places with several upstream and/or several downstream transitions. For simplicity, let us assume that input transitions (u k ) are connected to resource-allocation transitions (x k A ) via a single place with zero delay and no initial tokens, the same being true for the connection between resource-release transitions (x k R ) and output transitions (y k ). These assumptions will be dropped in Section 4.3.
It is not possible to model systems exhibiting resource-sharing phenomena by linear equations such as (2). Considering a system like the one from Fig. 5, in order to express the relationship among counters x k A and x k R , k ∈ {1, . . . , K}, the Hadamard product of series is introduced ([10]).
Definition 18. The Hadamard product of s 1 , s 2 ∈ Σ, written s 1 ⊙ s 2 , is the counter defined as follows: (∀t ∈ ℤ) (s 1 ⊙ s 2 )(t) = s 1 (t) ⊗ s 2 (t). This operation is commutative, distributes over ⊕ and ∧, has neutral element eδ +∞ , and s ε is absorbing for it (i. e., (∀s ∈ Σ) s ⊙ s ε = s ε ). ⋄

Consider a join structure (i. e., a place with two or more incoming transitions) as shown in Fig. 6. At any time instant t, the accumulated number of firings of γ, in conventional algebra, cannot exceed that of λ 1 and λ 2 combined, which translates to λ 1 ⊙ λ 2 ⪯ γ. Similarly, for a fork structure (i. e., a place with two or more outgoing transitions) such as the one shown in Fig. 6, the accumulated number of firings of γ 1 and γ 2 combined, again in conventional algebra, can never exceed that of λ, meaning λ ⪯ γ 1 ⊙ γ 2 .

Figure 6: A join and a fork structure.
Generalizing these ideas allows us to write, for the system from Fig. 5, an inequality (9) coupling the resource-allocation counters x k A and the resource-release counters x k R of all subsystems, k ∈ {1, . . . , K}, through the resource model β.
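Definition 18 and the join/fork inequalities are easy to experiment with numerically. In the sketch below (ours), the Hadamard product is pointwise conventional addition of counters, and its residual ⊙♯ is computed on a finite window as the running maximum of the pointwise difference (cf. Remark 22 later in the text):

```python
def hadamard(s1, s2):
    """(s1 . s2)(t) = s1(t) + s2(t): combined accumulated firings."""
    return lambda t: s1(t) + s2(t)

def hadamard_res(s1, s2, t_min=-50):
    """s1 ./ s2 on a window: the smallest nondecreasing function lying
    (conventionally) above s1(t) - s2(t), assuming both counters are
    constant before t_min."""
    return lambda t: max(s1(tau) - s2(tau) for tau in range(t_min, t + 1))

# Two upstream transitions of a join: lam1 fires twice at t = 0,
# lam2 fires once at t = 3 (hypothetical counters).
lam1 = lambda t: 0 if t < 0 else 2
lam2 = lambda t: 0 if t < 3 else 1

gamma_bound = hadamard(lam1, lam2)   # gamma(t) may not exceed this value
```

For instance, gamma_bound(5) = 3: by t = 5 the downstream transition γ can have fired at most three times. One also checks the residuation property conventionally: (s1 ⊙♯ s2)(t) + s2(t) ≥ s1(t) for all t in the window.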

Optimal control of TEGs with one shared resource
For a system like the one from Fig. 5, competition for the resource is, in general, going to make it impossible for all subsystems to concurrently follow a just-in-time schedule with respect to their individual output-references. One way to settle the dispute is to introduce a priority policy among the subsystems. We henceforth assume, without loss of generality, that subsystem S k has higher priority than S k+1 , for all k ∈ {1, . . . , K−1}. The priority policy is based on a simple rule: for each k ∈ {2, . . . , K} and for all j ∈ {1, . . . , k − 1}, S k cannot interfere with the performance of S j . Let the input-output behavior of each S k , ignoring all other subsystems, be described by y k = G k ⊗ u k (which, according to the assumptions made above, is equivalent to x k R = G k ⊗ x k A ), and assume that corresponding references z k are given. The subsystem with highest priority, S 1 , is free to use the resource at will; therefore, we can effectively neglect all other subsystems and simply compute its optimal input by u 1 opt = x 1 A opt = G 1 ⃝\ z 1 (cf. Section 2.5). For S 2 , we must compute the optimal input under the restriction that the optimal behavior of S 1 is unchanged; based on (9), this means we must respect inequality (10). In fact, we want to determine the greatest x 2 A , and thus also the corresponding u 2 , satisfying both G 2 ⊗ u 2 ⪯ z 2 and (10); seeing that (10) implies (11), the following result comes in handy.

Proposition 19 ([10]). For any a ∈ Σ, the mapping Π a : Σ → Σ defined by Π a (x) = a ⊙ x is residuated; the greatest solution of a ⊙ x ⪯ b is denoted Π a ♯ (b) = b ⊙ ♯ a. ⋄

From Proposition 19, inequality (11) leads to (12) which, in turn, implies (13). Since for any s 1 , s 2 ∈ Σ it holds that s 1 ⪯ s 2 ⇔ s 1 = s 1 ∧ s 2 , one can see that (13) is equivalent to (14). The greatest x 2 A satisfying (14), x 2 A opt , is the greatest fixed point (provided it exists) of the mapping Φ 2 : Σ → Σ given in (15). Notice that Φ 2 consists of a succession of order-preserving operations (Hadamard product ⊙ and its residual ⊙ ♯ , left-division ⃝\ , and infimum ∧), which, in turn, can be seen as the composition of corresponding isotone mappings (for instance, following the notation of Proposition 19, s 1 ⊙ s 2 corresponds to Π s 1 (s 2 ), and similarly for the other operations). Therefore, according to Remark 3, Φ 2 is also isotone; Remark 4 then ensures the existence of its greatest fixed point, which yields the desired optimal solution x 2 A opt (= u 2 opt ). Using the same procedure, we obtain, for each k, an analogous inequality and, defining a mapping Φ k by analogy with (15), its greatest fixed point provides x k A opt and, therefore, also u k opt .
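To make the fixed-point computation tangible, the following self-contained sketch (our own construction; the constraint form and all data are assumptions, not the paper's (9) or (15)) treats one shared resource of capacity 2 with a 2-time-unit delay between release and re-allocation. S 1 is frozen at the just-in-time schedule of Example 12 (processing time 3); a lower-priority S 2 (processing time 4, one output firing required by t = 50) receives the latest allocation counter satisfying, conventionally, x1A(t) + x2A(t) ≤ 2 + x1R(t − 2) + x2R(t − 2) for all t, via an upward Knaster-Tarski-style iteration of the induced isotone map:

```python
T0, T1 = 20, 70                       # finite time window

def x1A(t):                           # S1 allocations = u_opt of Example 12
    for eta, tau in [(0, 38), (1, 41), (2, 43), (3, 46), (4, 51)]:
        if t <= tau:
            return eta
    return 6

def x1R(t):                           # S1 releases: processing time 3
    return x1A(t - 3)

def avail2(t):                        # capacity left for S2 at time t
    return 2 + x1R(t - 2) - x1A(t)

# Unconstrained just-in-time allocations of S2: allocate at 46, release at 50.
v2 = {t: (0 if t <= 45 else 1) for t in range(T0, T1)}

x = dict(v2)                          # iterate upward from the unconstrained optimum
changed = True
while changed:
    changed = False
    for t in range(T1 - 1, T0 + 5, -1):
        # S2's tokens recirculate 2 units after release (x2R(t) = x2A(t-4)),
        # so the pooled constraint reads x(t) - x(t-6) <= avail2(t).
        need = x[t] - avail2(t)
        if x[t - 6] < need:           # violated: S2 must allocate earlier
            x[t - 6] = need
            changed = True
    for t in range(T0 + 1, T1):       # restore the nondecreasing (counter) shape
        if x[t] < x[t - 1]:
            x[t] = x[t - 1]
            changed = True
```

The iteration converges to x2A(t) = 0 for t ≤ 35 and 1 for t ≥ 36: S 2 is forced to allocate at t = 36 and release at t = 40, well ahead of its deadline, because S 1 keeps the pool saturated afterwards. Any feasible allocation counter dominates every iterate, so the limit is the optimal (latest feasible) schedule under the assumed constraint.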

Modeling and optimal control of TEGs with multiple shared resources
Consider, as before, a system comprising K TEGs S 1 , . . . , S K , but now suppose they share L resources, as shown in Fig. 9. Similarly to Section 4.1, each β ℓ , ℓ ∈ {1, . . . , L}, is a TEG (or possibly just a place) describing the capacity as well as the minimal delay between release and allocation events of resource ℓ. We denote by x kℓ A (resp. x kℓ R ) the transition, and associated counter, representing the allocation (resp. release) of resource ℓ by subsystem S k . Accordingly, H kℓ denotes the internal dynamics of S k between x kℓ A and x kℓ R . As opposed to Section 4.1, here we consider that there may also be some dynamics between input transitions (u k ) and resource-allocation transitions for the first resource (x k1 A ), modeled by TEGs (or, again, simply single places) called P k1 , as well as between resource-release transitions for the last resource (x kL R ) and output transitions (y k ), called P k(L+1) . The TEG (or single place) describing the dynamics between the release of resource ℓ − 1 and the allocation of resource ℓ by S k (i. e., between x k(ℓ−1) R and x kℓ A ) is denoted P kℓ . Through the same reasoning as applied in Section 4.1, it is straightforward to conclude that, for each ℓ ∈ {1, . . . , L}, the relationship among counters x kℓ A and x kℓ R must respect the corresponding inequality (16).

The optimal (just-in-time) schedule for the usage of the resources is sought under the same priority policy as in Section 4.2. Let the input-output behavior of each S k , considering the resources and ignoring all other subsystems, be described as usual by y k = G k ⊗ u k , and let us again assume corresponding references z k to be given. For S 1 , we can simply compute the optimal input by u 1 opt = G 1 ⃝\ z 1 . Based on u 1 opt , we can obtain the optimal firing schedules for the remaining transitions of S 1 . For instance, we have x 11 A opt = P 11 ⊗ u 1 opt and x 11 R opt = H 11 ⊗ x 11 A opt . In general, for each ℓ ∈ {2, . . . , L} we can then successively compute x 1ℓ A opt = P 1ℓ ⊗ x 1(ℓ−1) R opt and x 1ℓ R opt = H 1ℓ ⊗ x 1ℓ A opt . In order to determine the optimal input u 2 opt for S 2 , i. e., the greatest u 2 such that G 2 ⊗ u 2 ⪯ z 2 , while guaranteeing no interference with the optimal behavior of S 1 , based on (16) we must have, for each ℓ ∈ {1, . . . , L}, the corresponding condition (17). Notice that, for a just-in-time input u 2 computed so that (17) holds for ℓ = 1, it follows that x 21 A = P 21 ⊗ u 2 , and hence (17) for ℓ = 1 can be written directly in terms of u 2 . In fact, the optimal input we seek is such that (17) holds for every ℓ and, furthermore, such that a just-in-time behavior is enforced throughout the system, implying x 2ℓ A = P 2ℓ ⊗ x 2(ℓ−1) R for all ℓ ∈ {2, . . . , L}. This means we can express any x 2ℓ A in terms of u 2 ; defining the cumulative terms P̃ 2ℓ = P 2ℓ ⊗ H 2(ℓ−1) ⊗ P 2(ℓ−1) ⊗ · · · ⊗ H 21 ⊗ P 21 , we have x 2ℓ A = P̃ 2ℓ ⊗ u 2 and hence x 2ℓ R = H 2ℓ ⊗ P̃ 2ℓ ⊗ u 2 . Then, we can rewrite (17) in terms of u 2 which, proceeding similarly to Section 4.2, leads to a family of conditions on u 2 ; define, for each ℓ ∈ {1, . . . , L}, the corresponding mapping Φ 2ℓ : Σ → Σ. We seek the greatest u 2 such that u 2 ⪯ G 2 ⃝\ z 2 and (∀ℓ ∈ {1, . . . , L}) u 2 ⪯ Φ 2ℓ (u 2 ). This amounts to looking for the greatest fixed point of the (isotone) mapping Φ 2 : Σ → Σ,

Φ 2 (x) = (G 2 ⃝\ z 2 ) ∧ ⋀ ℓ∈{1,...,L} Φ 2ℓ (x).

The same arguments presented above can be applied to determine u k opt for an arbitrary k ∈ {1, . . . , K}. Defining the terms P̃ kℓ analogously and expressing each x kℓ A and x kℓ R in terms of u k , from (16) we obtain, for each ℓ ∈ {1, . . . , L}, a condition of the same form as (17). Then, proceeding as before and defining the mapping Φ kℓ for each ℓ ∈ {1, . . . , L}, the greatest u k such that u k ⪯ G k ⃝\ z k and u k ⪯ Φ kℓ (u k ) for all ℓ ∈ {1, . . . , L} is given by the greatest fixed point of the mapping Φ k : Σ → Σ, defined by analogy with Φ 2 .

Remark 22 ([10]). Given two counters x 1 , x 2 ∈ Σ, the series s ∈ ℤ min [[δ]] defined by (∀t ∈ ℤ) s(t) = x 1 (t) − x 2 (t) is not necessarily a counter; x 1 ⊙ ♯ x 2 is the greatest counter less than or equal to s (in the order of ℤ min [[δ]]). Similarly, provided x 2 ∈ Σ (cf. Proposition 21), x 1 ⊙ ♭ x 2 is the least counter greater than or equal to s. ⋄

Note that, in Proposition 21, the restriction of a to the subset Σ is necessary for Π a to meet the conditions of Theorem 11; in fact, if a(τ) = ε for some τ ∈ ℤ, these conditions are violated.

Remark 23. Since we take a term like ηδ τ to mean that a transition has accumulated η firings by time τ, it is reasonable to assume that the counters u, x i , and y (cf. Section 2.3) are elements of Σ. Note, additionally, that for any finite subset B ⊆ Σ one has ⨂ s∈B s ∈ Σ and ⨀ s∈B s ∈ Σ. ⋄

Optimal control of TEGs with resource sharing and output-reference update
In this section, as the main contribution of this paper, we incorporate the ideas discussed in Section 3 into the class of systems studied in Section 4 by showing how to determine the optimal (just-in-time) control inputs in the face of changes in the output-references for TEGs that share resources under a given priority policy. We again emphasize that, in this setting, the overall system is not a TEG. This section is structured similarly to Section 4, starting with the simple case of a single shared resource (Sections 5.1 and 5.2) and then generalizing to the case of multiple resources (Section 5.3).

Problem formulation: the case of a single shared resource
Consider the system from Fig. 5 and assume every subsystem S_k is operating optimally with respect to its own output-reference z_k, according to the priority-based strategy introduced in Section 4.1. Now, suppose that at time T each S_k has its reference z_k updated to z_k′ (with the possibility that z_k′ = z_k for some of them). Analogously to Section 3, we seek, for each k, the input u_k′^opt which leads the corresponding output to optimally track z_k′ while preserving the input u_k^opt up to time T. The crucial difference is that now the priority scheme must be observed and, furthermore, the past resource allocations by subsystems with lower priority must also be respected. Such allocations are relevant, despite having occurred before time T, because the respective resource releases may take place after T, thus influencing the availability of the resource in the meantime.
For the purpose of the discussion to follow, let us fix an arbitrary k ∈ {1, . . . , K}. When updating the input of S_k, we require minimal interference from lower-priority subsystems (i. e., all S_j with j ∈ {k + 1, . . . , K}). This means that we have to respect past resource allocations in these subsystems, but may ignore future ones. In sum, (i) we must compute x_k^A′,opt in decreasing order of priority, i. e., start from k = 1 and proceed up to k = K; (ii) when calculating x_k^A′,opt for k > 1, we must consider x_i^A′,opt for every i ∈ {1, . . . , k − 1}; (iii) when calculating x_k^A′,opt for k < K, we must consider r_T^♯(x_j^A,opt) for every j ∈ {k + 1, . . . , K}. It will be convenient to define the terms H_k^A, H_k^R, L_k^A, and L_k^R. H_k^A combines the counters x_i^A′,opt of all subsystems S_i with priority higher than that of S_k, referring to the already-updated optimal schedules of resource-allocation transitions x_i^A with respect to the corresponding updated references z_i′; accordingly, H_k^R combines the counters x_i^R′,opt = G_i ⊗ x_i^A′,opt representing the respective resource-release events.
In a similar way, L_k^A combines the counters r_T^♯(x_j^A,opt) of all subsystems S_j with priority lower than that of S_k, representing the past firings (up to time T) of resource-allocation transitions x_j^A and neglecting their firings after time T, whereas L_k^R gathers the respective resource-release events by combining the counters G_j ⊗ r_T^♯(x_j^A,opt); it should be emphasized that, even though we only consider the resource allocations by S_j up to time T, the respective resource-release events may take place after T. Thus, based on (9) and on the foregoing discussion, in order to update u_k = x_k^A without compromising the performance of higher-priority subsystems and, at the same time, ensuring minimal interference of lower-priority subsystems while taking into account their past resource allocations, we must respect condition (⋆), where it is understood that for k = 1 (resp. k = K) the degenerate terms H_1^A and H_1^R (resp. L_K^A and L_K^R) are to be neglected.
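The update order (i)–(iii) lends itself to a simple orchestration loop. The sketch below is schematic and every name in it is ours: r_T is modeled as plain prefix extraction, the higher- and lower-priority terms are passed around as lists of counters, and update_input stands in for the fixed-point computation developed in the remainder of this section.

```python
def r_T(x, T):
    """Past of a counter: its values up to and including time T (hypothetical
    encoding; only prefix comparisons are needed in this sketch)."""
    return x[:T + 1]

def update_all(x_opt, z_new, T, update_input):
    """Update all subsystems in decreasing order of priority.

    x_opt:        current optimal allocation counters, highest priority first.
    z_new:        updated references, in the same order.
    update_input: callback computing the new counter for subsystem k from
                  (k, its new reference, the past of its own input, the
                  already-updated higher-priority counters, and the pasts of
                  the lower-priority counters); it is a stand-in for the
                  greatest-fixed-point computation of Section 5.2.
    """
    updated = []
    K = len(x_opt)
    for k in range(K):
        higher = updated                                          # item (ii)
        lower_past = [r_T(x_opt[j], T) for j in range(k + 1, K)]  # item (iii)
        updated.append(update_input(k, z_new[k], r_T(x_opt[k], T),
                                    higher, lower_past))
    return updated
```

With a trivial callback that keeps the old schedules, the loop simply reproduces x_opt; the point of the sketch is only the ordering and the data handed to each step.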
The problem of determining the new optimal input u_k′^opt (= x_k^A′,opt) with respect to a reference z_k′ given at time T can be formulated as follows: find the greatest element of the set

F_k = { x ∈ Σ | G_k ⊗ x ⪯ z_k′ and r_T(x) = r_T(x_k^A,opt) and (⋆) holds }.   (20)

Remark 24. It should be clear that, for any k ∈ {1, . . . , K}, the greatest element of F_k, if it exists, is the desired u_k′^opt. ⋄

Optimal update of the inputs: the case of a single shared resource
We set out to look for the greatest element of the set F_k (defined as in (20)).

Proposition 25. Let D and C be complete dioids, let f_1, f_2 : D → C be residuated mappings, let ψ : D → D be isotone, and let c ∈ C. Define the sets S_ψ = { x ∈ D | f_1(x) ⪯ ψ(x) and f_2(x) = c } and S̄_ψ = { x ∈ D | f_1(x) ⪯ ψ(x) and f_2(x) ⪯ c }, together with the isotone mapping Ω : D → D, Ω(x) = x ∧ f_1^♯(ψ(x)) ∧ f_2^♯(c). If S_ψ ≠ ∅, then the greatest element of S_ψ exists and coincides with the greatest element of S̄_ψ, namely the greatest fixed point of Ω.

Proof. Denote χ = ⨁_{x∈S_ψ} x and χ̄ = ⨁_{x∈S̄_ψ} x. Note that x ∈ S̄_ψ if and only if x ⪯ f_1^♯(ψ(x)) and x ⪯ f_2^♯(c). So, we can rewrite S̄_ψ as S̄_ψ = { x ∈ D | x = Ω(x) }, clearly implying χ̄ = ⨁{ x ∈ D | Ω(x) = x }. Then, it also follows from Remark 4 that χ̄ ∈ S̄_ψ. Now, assume S_ψ ≠ ∅. As S_ψ ⊆ S̄_ψ, this implies (∃ x̃ ∈ S̄_ψ) f_2(x̃) = c. Taking such an x̃, we have x̃ ⪯ χ̄ and so c = f_2(x̃) ⪯ f_2(χ̄) (as f_2 is isotone). But we saw above that χ̄ ∈ S̄_ψ, meaning f_2(χ̄) ⪯ c, so f_2(χ̄) = c. Therefore, χ̄ ∈ S_ψ and hence χ̄ ⪯ χ. On the other hand, S_ψ ⊆ S̄_ψ implies χ ⪯ χ̄, showing that χ = χ̄. □

Now, let us once more fix an arbitrary k ∈ {1, . . . , K}, and assume x_i^A′,opt has been determined for each (if any) i ∈ {1, . . . , k − 1}. Seeing that (⋆) can be equivalently rewritten as a bound on G_k ⊗ x, we define the mapping Ψ_k : Σ → Σ accordingly. This reveals a correspondence between the set F_k and the set S_ψ from Proposition 25: take both D and C as Σ, f_1 as L_{G_k}, ψ as Ψ_k, f_2 as r_T, and c as r_T(x_k^A,opt). So, as long as F_k ≠ ∅, the conditions of the proposition are met and, recalling that r_T^♯ ∘ r_T = r_T^♯, the optimal update of x_k^A is the greatest fixed point of the (isotone) mapping Γ_k : Σ → Σ, Γ_k(x) = x ∧ L_{G_k}^♯(Ψ_k(x)) ∧ r_T^♯(x_k^A,opt). Next, we must investigate when F_k is nonempty. To that end, consider the set F̄_k obtained from F_k by dropping the condition G_k ⊗ x ⪯ z_k′; we want to show that this set has a least element, i. e., that there exists a (unique) least counter, which we will denote by x̲_k^A, satisfying both (⋆) and r_T(x) = r_T(x_k^A,opt). Define the mapping ϒ_k : Σ → Σ accordingly. Note that, from Proposition 21 and Remark 23, the mapping Π_{(H_k^A ⊙ L_k^A)} is dually residuated, so ϒ_k is well defined. Since x ⪰ r_T(x) for any x ∈ Σ, for any element x̃ of F̄_k we have ϒ_k(x̃) ⪯ x̃, and therefore ⋀ϒ_k ⪯ x̃, where ⋀ϒ_k denotes (according to Remark 4) the least fixed point of ϒ_k. To prove the converse inequality, we proceed to show that ⋀ϒ_k is itself an element of F̄_k.
According to Remark 4, ⋀ϒ_k is a fixed point of ϒ_k; therefore (⋆) holds for x_k^A = ⋀ϒ_k (cf. Def. 8), and it suffices to prove that r_T(⋀ϒ_k) = r_T(x_k^A,opt). Since we assume x_i^A′,opt to be given for each i ∈ {1, . . . , k − 1}, according to (⋆) we know that ϒ_k(x_k^A,opt) ⪯ x_k^A,opt, which, in turn, implies ⋀ϒ_k ⪯ x_k^A,opt. This, together with the fact that r_T is isotone and r_T ∘ r_T^♯ = r_T, yields r_T(⋀ϒ_k) = r_T(x_k^A,opt), which concludes the proof. □
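The greatest-fixed-point computation behind Γ_k can be made concrete on a finite horizon. In the toy sketch below (all names and numbers are ours; with the ℤ_min order reversed with respect to ordinary numbers, the dioid infimum becomes a numeric elementwise max and, for nonnegative counters, the dioid top is the all-zeros sequence), omega combines a residuation-type bound with the frozen past, and iteration from the top converges to the greatest fixed point. In line with Proposition 25, the result satisfies the past-equality constraint exactly when the constrained set is nonempty, so checking the prefix afterwards doubles as a feasibility test.

```python
def gfp(phi, top, max_iter=10_000):
    """Greatest fixed point of an isotone map, by iteration from the top."""
    x = top
    for _ in range(max_iter):
        nxt = phi(x)
        if nxt == x:
            return x
        x = nxt
    raise RuntimeError("no convergence within max_iter")

# Toy instance: 'bound' plays the role of a residuation-type bound and
# 'past' freezes the counter values for t <= T (both invented for the demo).
bound = [1, 2, 2, 3, 3]
past = [1, 2]
T = 1

def omega(x):
    # Dioid infimum of x, the bound, and the past constraint = numeric max.
    return [max(x[t], bound[t], past[t] if t <= T else 0)
            for t in range(len(x))]

x_star = gfp(omega, [0] * 5)
```

Here x_star equals [1, 2, 2, 3, 3] and its prefix up to T matches the frozen past; had it disagreed, the constrained set would have been empty and a reference relaxation as in Proposition 27 would be required.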
A direct consequence of Proposition 26 is that x̲_k^A ⪯ ⋀ϒ_k which, combined with (23), implies x̲_k^A = ⋀ϒ_k, thus proving that (22) holds. Isotony of L_{G_k} then implies

F_k ≠ ∅ ⟺ G_k ⊗ x̲_k^A ⪯ z_k′.   (26)

In case G_k ⊗ x̲_k^A ⋠ z_k′ (and hence, according to (26), F_k = ∅), this means the past inputs of S_k itself, combined with the (updated) operation of higher-priority subsystems and with the past inputs of lower-priority ones, make it impossible for S_k to respect z_k′. As (⋆) and r_T(x_k^A) = r_T(x_k^A,opt) are irrevocable, we will then seek the least way to relax z_k′ (i. e., look for the least counter z_k″ ⪰ z_k′) such that the set

F_k^{z″} = { x ∈ Σ | G_k ⊗ x ⪯ z_k″ and r_T(x) = r_T(x_k^A,opt) and (⋆) holds }

is nonempty. The solution is given by the following result.
Proposition 27. The least counter z_k″ ⪰ z_k′ such that F_k^{z″} ≠ ∅ is z_k″ = z_k′ ⊕ (G_k ⊗ x̲_k^A).

Proof. Taking z_k″ = z_k′ ⊕ (G_k ⊗ x̲_k^A), it can be readily checked that x̲_k^A ∈ F_k^{z″}, therefore F_k^{z″} ≠ ∅; the proof then proceeds by direct analogy with that of Proposition 16. □
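Numerically, the relaxation of Proposition 27 is just a pointwise minimum (⊕ in ℤ_min) between the desired reference and the output achievable from the least feasible input. A sketch under the same illustrative finite-horizon counter encoding as before (all names are ours):

```python
def conv(g, u):
    """Min-plus convolution (g ⊗ u)(t); g maps each delay tau to a token
    count eta, and counter values before time 0 are taken as 0."""
    return [min(g[tau] + (u[t - tau] if t - tau >= 0 else 0) for tau in g)
            for t in range(len(u))]

def relax(z_prime, g, x_low):
    """Least relaxation z'' = z' ⊕ (g ⊗ x_low): a pointwise min in Z_min."""
    y = conv(g, x_low)
    return [min(a, b) for a, b in zip(z_prime, y)]
```

With g = {1: 0} and the least feasible input x_low = [1, 2, 2, 3, 3], the achievable output is [0, 1, 2, 2, 3]; against the over-demanding reference z′ = [1, 2, 3, 3, 4], the least relaxation is z″ = [0, 1, 2, 2, 3], which can then be tracked exactly.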

Extension to the case of multiple shared resources
Consider the system from Fig. 9, with every subsystem S_k following the optimal schedule with respect to output-reference z_k, obtained according to Section 4.3. Suppose that each reference z_k is updated to z_k′ at time T (with perhaps z_k′ = z_k for some of them). In this section we seek, for each k, the optimal input u_k′^opt which preserves u_k^opt up to time T and results in the output y_k′^opt that tracks z_k′ as closely as possible, without interfering with the operation of higher-priority subsystems and while respecting the past allocations of every resource by lower-priority subsystems.

Figure 10: Updated optimal schedules obtained in Example 28; the gray, black, and crosshatched bars represent the operation of S_1, S_2, and S_3, respectively, whereas the dashed bars are the delays imposed by the resource.
As usual, we base the following discussion on a fixed but arbitrary k ∈ {1, . . . , K}. Let us denote by x_kℓ^A′,opt the counter representing the updated optimal firing schedule for the resource-allocation transition x_kℓ^A. Arguing as in Section 5.1, the task at hand can be summarized as follows: (i) we must compute u_k′^opt in decreasing order of priority; (ii) when calculating u_k′^opt for k > 1, we must consider x_iℓ^A′,opt for every i ∈ {1, . . . , k − 1} and for all ℓ ∈ {1, . . . , L}; (iii) when calculating u_k′^opt for k < K, we must consider r_T^♯(x_jℓ^A,opt) for every j ∈ {k + 1, . . . , K} and for all ℓ ∈ {1, . . . , L}.
Still along the lines of Section 5.1, define the terms H_kℓ^A, H_kℓ^R, L_kℓ^A, and L_kℓ^R, which can be explained as in the referred section, only now for each resource ℓ. We aim at updating u_k without compromising the performance of higher-priority subsystems and, at the same time, ensuring minimal interference from lower-priority subsystems while taking into account their past allocations of all resources. Based on (16), we must consequently respect, for every ℓ ∈ {1, . . . , L}, condition (⋆⋆), where it is understood that for k = 1 (resp. k = K) the degenerate terms H_1ℓ^A and H_1ℓ^R (resp. L_Kℓ^A and L_Kℓ^R) are to be neglected.
We can then formulate the problem of optimally updating the input u_k′^opt with respect to a reference z_k′ given at time T as follows: find the greatest element of the set

M_k = { u_k ∈ Σ | G_k ⊗ u_k ⪯ z_k′ and r_T(u_k) = r_T(u_k^opt) and (⋆⋆) holds for all ℓ ∈ {1, . . . , L} }.
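Membership in M_k is directly checkable on a finite horizon. In the sketch below (all names are ours; the dioid inequality G_k ⊗ u_k ⪯ z_k′ becomes a numeric pointwise ≥ under the ℤ_min convention, the r_T-equality a prefix comparison, and the per-resource conditions (⋆⋆) are abstracted as a list of predicates):

```python
def in_M(u, g, z_prime, u_opt, T, star_conditions):
    """Check membership of u in (a finite-horizon stand-in for) M_k: the
    output meets the reference, the past of u agrees with the past of
    u_opt, and every per-resource condition holds."""
    # Min-plus convolution y = g ⊗ u (counter values before time 0 are 0).
    y = [min(g[tau] + (u[t - tau] if t - tau >= 0 else 0) for tau in g)
         for t in range(len(u))]
    meets_ref = all(a >= b for a, b in zip(y, z_prime))  # G ⊗ u ⪯ z' in Z_min
    same_past = u[:T + 1] == u_opt[:T + 1]               # r_T(u) = r_T(u_opt)
    return meets_ref and same_past and all(c(u) for c in star_conditions)
```

For instance, with a unit-delay g, u_opt itself is a member (for an empty list of (⋆⋆) predicates), whereas any candidate whose past deviates from u_opt is rejected.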
Recall from Section 4.3 that we can write x_kℓ^A = P_kℓ ⊗ u_k and x_kℓ^R = H_kℓ ⊗ P_kℓ ⊗ u_k, with P_kℓ defined as in (19). Applying this to (⋆⋆) and proceeding as in Section 5.2, we can define, for each ℓ ∈ {1, . . . , L}, a mapping Ψ_kℓ : Σ → Σ such that (⋆⋆) is equivalent to u_k ⪯ Ψ_kℓ(u_k); writing Ψ_k(x) = ⋀_{ℓ∈{1,...,L}} Ψ_kℓ(x), we can then rewrite M_k as M_k = { x ∈ Σ | x ⪯ Ψ_k(x) and r_T(x) = r_T(u_k^opt) }.
Note that x ⪯ Ψ_k(x) is equivalent to Id_Σ(x) ⪯ Ψ_k(x), where Id_Σ is the identity mapping on Σ. It is trivial to verify that Id_Σ is residuated and that Id_Σ^♯ = Id_Σ. Therefore, there exists a correspondence between M_k and S_ψ from Proposition 25: take both D and C as Σ, f_1 as Id_Σ, ψ as Ψ_k, f_2 as r_T, and c as r_T(u_k^opt). Provided M_k ≠ ∅, the proposition entails that u_k′^opt can be determined by computing the greatest fixed point of the (isotone) mapping Λ_k : Σ → Σ, Λ_k(x) = x ∧ Ψ_k(x) ∧ r_T^♯(u_k^opt). In order to check whether M_k is nonempty, consider the set

M̄_k = { u_k ∈ Σ | (⋆⋆) holds for all ℓ ∈ {1, . . . , L} and r_T(u_k) = r_T(u_k^opt) }.
We want to show that M̄_k has a least element, i. e., that there exists a (unique) least counter u̲_k satisfying (⋆⋆) for all ℓ ∈ {1, . . . , L} and r_T(u̲_k) = r_T(u_k^opt). Define, for each ℓ ∈ {1, . . . , L}, a mapping ϒ_kℓ : Σ → Σ, and also the mapping ϒ_k : Σ → Σ, in analogy with Section 5.2. Since (⋆⋆) is equivalent to (28), one can see that for any element ū_k of M̄_k we have ϒ_kℓ(ū_k) ⪯ ū_k for all ℓ. As ū_k ⪰ r_T(ū_k) = r_T(u_k^opt), it actually holds that ϒ_k(ū_k) ⪯ ū_k, and hence ⋀ϒ_k ⪯ ū_k. By arguments parallel to those put forth in Section 5.2, it can be shown that the converse inequality also holds, so we have u̲_k = ⋀ϒ_k. Analogously to (26), this leads to the conclusion that M_k ≠ ∅ ⟺ G_k ⊗ u̲_k ⪯ z_k′.
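Dually to the greatest fixed point, the least element u̲_k = ⋀ϒ_k can be approached by iterating from the dioid bottom (numerically, +∞ everywhere, under the ℤ_min convention). In the toy sketch below (all names, caps, and numbers are ours; the per-resource caps merely stand in for the bounds induced by (⋆⋆)), the map upsilon is a numeric pointwise minimum of the caps and the frozen past, and the resulting u_low is the candidate against which the feasibility test G_k ⊗ u̲_k ⪯ z_k′ would then be run.

```python
INF = float('inf')

def lfp(phi, bottom, max_iter=10_000):
    """Least fixed point of an isotone map, by iteration from the bottom."""
    x = bottom
    for _ in range(max_iter):
        nxt = phi(x)
        if nxt == x:
            return x
        x = nxt
    raise RuntimeError("no convergence within max_iter")

# Toy instance: one numeric upper cap per resource (stand-ins for the
# bounds induced by the per-resource conditions), plus the frozen past
# acting as a cap at times t <= T.
caps = [[2, 3, 3, 4, 5], [1, 3, 4, 4, 5]]
u_past = [1, 2]
T = 1

def upsilon(x):
    # Numeric pointwise min of all caps: climbing in the Z_min order.
    out = []
    for t in range(len(x)):
        v = min(min(c[t] for c in caps), x[t])
        if t <= T:
            v = min(v, u_past[t])
        out.append(v)
    return out

u_low = lfp(upsilon, [INF] * 5)
```

Here u_low equals [1, 2, 3, 4, 5]; its prefix up to T agrees with the frozen past, as required of every element of M̄_k.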
In case G_k ⊗ u̲_k ⋠ z_k′, we look for the least counter z_k″ ⪰ z_k′ such that the set

M_k^{z″} = { u_k ∈ Σ | G_k ⊗ u_k ⪯ z_k″ and r_T(u_k) = r_T(u_k^opt) and (⋆⋆) holds for all ℓ ∈ {1, . . . , L} }

is nonempty. A straightforward adaptation of Proposition 27 gives the solution z_k″ = z_k′ ⊕ (G_k ⊗ u̲_k).
Following the same reasoning as before, we define the mapping Ψ_k^{z″} : Σ → Σ, with z_k″ = z_k′ ⊕ (G_k ⊗ u̲_k). We can then once more apply Proposition 25, only now taking ψ as Ψ_k^{z″} instead of Ψ_k, which leads to the conclusion that u_k′^opt is the greatest fixed point of the (isotone) mapping Λ_k^{z″} : Σ → Σ.

Conclusion
This paper solves the problem of ensuring that a number of TEGs competing for the use of shared resources operate optimally (in a just-in-time sense) even in the face of changes in their output-references. The proposed method assumes a prespecified priority policy on the component TEGs, and the optimal inputs are computed under the rule that the operation of lower-priority subsystems cannot interfere with the performance of higher-priority ones. However, when higher-priority subsystems recompute their inputs after a change in the reference signal occurs, they of course need to respect past resource allocations by lower-priority subsystems. We also study the case in which the limited availability of the resources renders it impossible to respect the updated output-reference for one or more of the subsystems. In this case, we show how to relax such references in an optimal way, so that the ultimately obtained inputs track them as closely as possible.
The results are illustrated through simple examples. Exploiting the generality of the method and applying it to a larger, more realistic case study is a subject for future work.