Fixed point of some Markov operator of Frobenius-Perron type generated by a random family of point-transformations in R^d

Abstract: Existence of a fixed point of a Frobenius-Perron type operator P : L^1 → L^1 generated by a family {φ_y}_{y∈Y} of nonsingular Markov maps defined on a σ-finite measure space (I, Σ, m) is studied. Two fairly general conditions are established, and it is proved that they imply, for any g ∈ G = {f ∈ L^1 : f ≥ 0 and ‖f‖ = 1}, the convergence (in the norm of L^1) of the sequence {P^j g}_{j=1}^∞ to a unique fixed point g_0. The general result is applied to a family of C^{1+α}-smooth Markov maps in R^d.


Introduction
Let there be given a randomly perturbed semi-dynamical system that evolves according to the rule x_j = φ_{ξ_j}(x_{j−1}), j = 1, 2, ..., where {φ_y}_{y∈Y} is a family of nonsingular Markov maps defined on a subset I ⊆ R^d (bounded or not), d ≥ 1, and {ξ_j}_{j=1}^∞ is a sequence of independent, identically distributed Y-valued random elements, where Y is a Polish metric space (i.e., a complete separable metric space).
Investigation of the asymptotic properties of such a semi-dynamical system leads to the study of the convergence of the sequence {P^j}_{j=1}^∞ of iterates of some Frobenius-Perron type operator P which is a Markov operator, i.e. ‖Pf‖ = ‖f‖ and Pf ≥ 0 if f ≥ 0, acting in L^1 (Markov operator of F-P type, in short). More precisely, let g ≥ 0, ‖g‖ = 1; if Prob(x_0 ∈ B) := ∫_B g dm (m denotes the Lebesgue measure on I), then Prob(x_j ∈ B) = ∫_B P^j g dm, where P is the Markov operator of F-P type defined by (3.1) (see Proposition 3.1).
We establish two fairly general conditions, (3.H1) and (3.H2), and prove that under those conditions the system in question evolves to a stationary distribution. That is, the sequence {P^j g}_{j=1}^∞ converges (in the norm of L^1) to a unique fixed point g_0 ∈ G (Th. 3.3). The two conditions are probabilistic analogues of the corresponding conditions in [1]. Indeed, if φ_y = φ for all y ∈ Y, that is, if x_j = φ_{ξ_j}(x_{j−1}) = φ(x_{j−1}) is a deterministic semi-dynamical system (φ a fixed Markov map), then we recover the two conditions given in [1].
As an application of this general result we show that the randomly perturbed semi-dynamical system generated by a family of C^{1+α}-smooth Markov maps in R^d evolves to a unique stationary density (Th. 4.2). Similar problems were considered by several authors; see e.g. [2-4] and the references therein.

Preliminaries
Let (I, Σ, m) be a σ-finite atomless (non-negative) measure space. Quite often the notions or relations occurring in this paper (in particular, the transformations considered) are defined, or hold, only up to sets of m-measure zero. Henceforth we do not mention this explicitly.
The restriction of a mapping τ : X → Y to a subset A ⊆ X is denoted by τ|_A, and the indicator function of a set A by 1_A.
We give a few definitions. The following kind of transformation is considered in this paper:

Definition 2.1. A nonsingular transformation φ from I into itself is said to be piecewise invertible iff
(2.M1) one can find a finite or countable partition π = {I_k : k ∈ K} of I, consisting of measurable subsets of I, such that m(I_k) > 0 for each k ∈ K and sup{m(I_k) : k ∈ K} < ∞ (here and in what follows K is an arbitrary countable index set);
(2.M2) for each I_k ∈ π, the mapping φ_k = φ|_{I_k} is one-to-one from I_k onto J_k = φ_k(I_k), and its inverse φ_k^{−1} is measurable.

Definition 2.2. A piecewise invertible transformation φ is said to be a Markov map iff its corresponding partition π satisfies the following two conditions:
(2.M3) π is a Markov partition, i.e., for each k ∈ K, φ(I_k) is a union of elements of π;
(2.M4) φ is indecomposable (irreducible) with respect to π, i.e., for each pair (j, k) ∈ K × K there exists an integer n > 0 such that I_k ⊆ φ^n(I_j).
In what follows we denote by ‖·‖ the norm in L^1 = L^1(I, Σ, m), and by G = G(m) the set of all (probability) densities, i.e., G := {g ∈ L^1 : g ≥ 0 and ‖g‖ = 1}.
Let τ : I → I be a nonsingular transformation. Then the formula

P_τ f = d(m_f ∘ τ^{−1})/dm,    (2.1)

where dm_f = f dm and d/dm denotes the Radon-Nikodym derivative, defines a linear operator from L^1 into itself. It is called the Frobenius-Perron operator (F-P operator, in short) associated with τ [5, 6].
Formula (2.1) is equivalent to the following one:

∫_A P_τ f dm = ∫_{τ^{−1}(A)} f dm for all A ∈ Σ.    (2.2)

From the definition of P_τ it follows that it is a Markov operator, i.e., P_τ is a linear operator and, for any f ∈ L^1(m) with f ≥ 0, P_τ f ≥ 0 and ‖P_τ f‖ = ‖f‖.
The last equality follows immediately from (2.2) by putting A = I. Further, P_τ G ⊆ G, and P_τ is a contraction, i.e. ‖P_τ‖ ≤ 1.
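As a concrete numerical illustration (not part of the original text), consider the doubling map τ(x) = 2x mod 1 on [0, 1). Its F-P operator acts exactly on piecewise-constant densities, so the sketch below, with an assumed grid of N bins, checks the Markov-operator properties just listed: mass preservation, positivity, and convergence of the iterates to the invariant density g_0 ≡ 1.

```python
import numpy as np

# Frobenius-Perron operator of the doubling map tau(x) = 2x mod 1 on [0, 1).
# Its two inverse branches are x/2 and (x+1)/2, each with derivative 1/2, so
#     (P f)(x) = (f(x/2) + f((x+1)/2)) / 2.
# For a piecewise-constant density on N equal bins this formula is exact:
# bin i of P f draws on bins i//2 and (i + N)//2 of f.
def fp_doubling(f):
    N = len(f)
    i = np.arange(N)
    return 0.5 * (f[i // 2] + f[(i + N) // 2])

N = 1024
f = 2.0 * (np.arange(N) + 0.5) / N        # initial density g(x) = 2x
for _ in range(12):
    f = fp_doubling(f)

print(abs(f.mean() - 1.0) < 1e-12)        # mass preserved: ||P f|| = ||f||
print(f.min() >= 0.0)                     # positivity preserved
print(np.abs(f - 1.0).max() < 0.01)       # iterates approach g0 = 1
```

All three checks print True; each iterate halves the slope of the density, so P^j g tends to the uniform density geometrically fast.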

Definition 2.3. In what follows we consider a family {φ_y}_{y∈Y} of Markov maps such that:
(2.My) there exists a partition π of I such that π_y = π for each y ∈ Y, where π_y is the Markov partition associated with φ_y.
For j ≥ 1 and y_1, ..., y_j ∈ Y we write y(j) = (y_j, ..., y_1) and set

φ_{y(j)} = φ_{y_j} ∘ φ_{y_{j−1}} ∘ ⋯ ∘ φ_{y_1}.    (2.3)

Clearly, φ_{y(j)} : I → I is a Markov map. Its Markov partition π_{y(j)} consists of the sets I^{y(j)}_{k(j)}, k(j) ∈ K^j, given by (2.4). By Def. 2.3, π_{y(1)} = π_{y_1} = π, and therefore I^{y(1)}_{k(1)} = I_k; consequently φ_{y(1)k(1)} = (φ_{y_1})|_{I_k} = φ_{y_1 k} and, according to (2.M2), φ_{yk} is a one-to-one mapping of I_k onto J^y_k = φ_{yk}(I_k).

We have to adjust the indecomposability condition (2.M4) to the new situation, in which a single Markov map φ is replaced by a family {φ_y}_{y∈Y} of Markov maps. We propose the following condition (see the note at the end of Rem. 2.4):

(2.M̃y) there is a Markov map φ_ỹ such that for each y ∈ Y: (a) π_y ≺ π_ỹ, i.e., for each V ∈ π_y there exists U ∈ π_ỹ which contains V; and (b) for each V ∈ π_y, φ_y(V) is a union of a number of sets U ∈ π_ỹ.

From the properties of φ_{y(r)} it follows that the formula

m_{y(r)k(r)}(A) := m(φ^{−1}_{y(r)k(r)}(A)), A ∈ Σ,    (2.5)

defines an absolutely continuous measure which is concentrated on J^{y(r)}_{k(r)} (i.e., m_{y(r)k(r)}(A) = m_{y(r)k(r)}(A ∩ J^{y(r)}_{k(r)})) and whose Radon-Nikodym derivative satisfies dm_{y(r)k(r)}/dm > 0 a.e. on J^{y(r)}_{k(r)}; the latter property follows from the nonsingularity and invertibility of φ_{y(r)k(r)}. We put

σ_{y(r)k(r)} := dm_{y(r)k(r)}/dm (r = 1, 2, ...).    (2.6)

Then the F-P operator P_{y(r)} of the Markov map φ_{y(r)} can be written in the form (2.8), and from (2.3) and (2.5) it follows that for any f ∈ L^1, f ≥ 0, the corresponding equalities hold.
In this situation each φ_{y(j)} = φ_{y_j} ∘ φ_{y_{j−1}} ∘ ⋯ ∘ φ_{y_1} is defined on sets of the form described above. We close this section with the following criterion for convergence in L^1 of the iterates P^n of a Markov operator; it is used in the proof of Th. 3.3.
Theorem 2.5. Let P : L^1 → L^1 be a Markov operator. Suppose there exists h ∈ L^1, h ≥ 0, ‖h‖ > 0, such that lim_{j→∞} ‖(P^j g − h)^−‖ = 0 for all g ∈ G. Then there exists exactly one P-fixed point g_0 ∈ G such that lim_{j→∞} P^j g = g_0 for all g ∈ G.
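On a finite state space this lower-function criterion reduces to a familiar fact about stochastic matrices. The following sketch (my illustration, with an assumed 3×3 column-stochastic matrix standing in for P) exhibits h_i = min_j P_ij as a nontrivial lower function and checks that iterates from different densities reach the same unique fixed density.

```python
import numpy as np

# Finite-state sketch of the lower-function criterion: P is column-stochastic
# and acts on densities (nonnegative vectors summing to 1).  Since
# (P g)_i = sum_j P_ij g_j >= (min_j P_ij) * sum_j g_j, the vector
# h_i = min_j P_ij satisfies P g >= h for every density g, so
# ||(P^j g - h)^-|| = 0 for all j >= 1, and h is nontrivial (||h|| > 0).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])
assert np.allclose(P.sum(axis=0), 1.0)    # Markov: total mass is preserved

h = P.min(axis=1)                         # lower function, ||h|| = 0.5 > 0

def iterate(g, n):
    for _ in range(n):
        g = P @ g
    return g

g1 = np.array([1.0, 0.0, 0.0])
g2 = np.array([0.0, 0.0, 1.0])
g0 = iterate(g1, 200)
print(np.allclose(g0, iterate(g2, 200)))  # same limit from any start
print(np.allclose(P @ g0, g0))            # the limit is the fixed density
```

Both checks print True: the existence of the nontrivial lower bound h forces all iterates toward a single fixed density g0, exactly the conclusion of Theorem 2.5.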

Convergence theorem
Let {φ_y}_{y∈Y} be a family of Markov maps in the sense of Def. 2.3, Σ(Y) the σ-algebra of all Borel subsets of Y (where Y is a Polish metric space), p a probability measure on (Y, Σ(Y)), and P_y the F-P operator of the Markov map φ_y.

We put

Pf := ∫_Y P_y f dp(y) for f ∈ L^1(m).    (3.1)

It follows from this definition, and from the fact that the F-P operator P_y : L^1(m) → L^1(m) (given by (2.8)) is a Markov operator, that P : L^1(m) → L^1(m) is also a Markov operator.
In the general case we have

P^j f = ∫_{Y^j} P_{y(j)} f dp^j(y(j)) for f ∈ L^1(m), j ≥ 1.    (3.2)

The operator P is called, in this note, the Markov operator of F-P type.
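On a finite state space the averaging in (3.1) has a transparent analogue, sketched below with assumed toy matrices: a p-weighted convex combination of Markov (column-stochastic) matrices, each standing in for one F-P operator P_y, is again a Markov operator.

```python
import numpy as np

# Finite-state sketch of P f = \int_Y P_y f dp(y): average two
# column-stochastic matrices (stand-ins for P_y) with weights p and 1 - p.
P0 = np.array([[0.9, 0.1],
               [0.1, 0.9]])
P1 = np.array([[0.2, 0.7],
               [0.8, 0.3]])
p = 0.3                                   # p({y = 0}) = 0.3, p({y = 1}) = 0.7
P = p * P0 + (1.0 - p) * P1               # the averaged operator

g = np.array([0.25, 0.75])                # a density
Pg = P @ g
print(np.allclose(P.sum(axis=0), 1.0))    # columns of P still sum to 1
print(Pg.min() >= 0.0 and np.isclose(Pg.sum(), 1.0))  # P g is again a density
```

Both checks print True: positivity and total mass survive the averaging, which is exactly why P defined by (3.1) is a Markov operator.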
Let

x_j = φ_{ξ_j}(x_{j−1}), j = 1, 2, ... .    (3.3)

It turns out that if the initial probability distribution is absolutely continuous, then the probability distribution of each random vector x_j, defined by (3.3), is also absolutely continuous:

Prob(x_j ∈ B) = ∫_B P^j g dm for all B ∈ Σ(I),

where P^j is the j-th iterate of the Markov operator P of F-P type defined by (3.1).
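The identity Prob(x_j ∈ B) = ∫_B P^j g dm can be checked by Monte Carlo simulation. The family below is a toy assumption, not the paper's: the doubling map and the tent map, chosen independently with probability 1/2 each. Both preserve Lebesgue measure on [0, 1), so P^j g approaches the uniform density, and the empirical law of x_j should too.

```python
import numpy as np

# Simulate an ensemble following x_j = phi_{xi_j}(x_{j-1}) as in (3.3),
# with phi_0 the doubling map 2x mod 1 and phi_1 the tent map (toy choices).
rng = np.random.default_rng(0)

def step(x, coin):
    doubled = (2.0 * x) % 1.0                 # phi_0: doubling map
    tent = 1.0 - np.abs(2.0 * x - 1.0)        # phi_1: tent map
    return np.where(coin, doubled, tent)

n, j = 200_000, 10
x = np.sqrt(rng.random(n))                    # sample x_0 from g(x) = 2x
for _ in range(j):
    x = step(x, rng.random(n) < 0.5)          # one random step per particle

freq, _ = np.histogram(x, bins=20, range=(0.0, 1.0))
freq = freq / n
print(np.abs(freq - 0.05).max() < 0.01)       # law of x_10 is nearly uniform
```

The check prints True: after ten random steps the histogram of the ensemble matches the stationary (uniform) density P^j g to well within sampling error.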
The convergence of the sequence {P^j} of iterates of the Markov operator P of F-P type associated with {x_j} is established under two general conditions. We are now going to formulate the first of them.

Now we give the following:
Definition 3.2. A density g ∈ G belongs to G(C*), 0 < C* < ∞, iff there exist constants C_{y(j)}(g) ≥ 1, y(j) ∈ Y^j, j ≥ j_0(g), such that the following two conditions are satisfied:
(a) A_{y(j)k}(g) ≤ C_{y(j)}(g) a_{y(j)k}(g) a.e. [p^j], j ≥ j_0(g), and
(b) lim sup_{j→∞} ∫ ln C_{y(j)}(g) dp^j < C*.
Having defined the set G(C*), we are in a position to formulate the first condition:
(3.H1) (Distortion inequality for the family P_{y(j)}, y(j) ∈ Y^j.) There exists a constant 0 < C* < ∞ such that the set G(C*) defined in Def. 3.2 contains a subset dense in G.
The theorem below states that the semi-dynamical system given by (3.3) evolves to a stationary distribution under the above two conditions.

Theorem 3.3. Assume conditions (3.H1) and (3.H2). Then there exists exactly one P-fixed point g_0 ∈ G, that is, Pg_0 = g_0, such that lim_{j→∞} P^j g = g_0 for all g ∈ G.
Proof. The point is to show that, for each r ≥ 1, the function u_r defined by (3.7) is a function which, under condition (3.H1), plays the role of h from Theorem 2.5 for P. That is, it satisfies the relation

lim_{j→∞} ‖(P^{j+r} g − u_r)^−‖ = 0 for all g ∈ G.    (3.8)

To this end note that, by condition (3.H1), there exists a subset G_0 ⊆ G(C*) dense in G. Thus for any g ∈ G_0 there exists, by Def. 3.2(a), j_0 = j_0(g) such that the inequalities (3.9) hold for each j ≥ j_0, all y(j) ∈ Y^j, any I_k ∈ π, and m × m-a.e. (x, y) ∈ I_k × I_k. These inequalities imply the following estimate:

C^{−1}_{y(j)}(g) F_{r,w(r)}(P_{y(j)} g) ≤ P_{w(r)} P_{y(j)} g ≤ C_{y(j)}(g) F_{r,w(r)}(P_{y(j)} g)    (3.10)

for every r ≥ 1, j ≥ j_0 and all w(r) ∈ Y^r, y(j) ∈ Y^j, where C_{y(j)}(g) are the constants involved in Def. 3.2 and F_{r,w(r)} is defined by the following formula, in which σ̂_{w(r)k(r)} and I^{w(r−1)} are defined by (3.6) and (2.4), respectively. To see this, note that from (3.9) one obtains the corresponding pointwise inequalities, according as x ∈ J^{w(r)}_{k(r)} or x ∈ I \ J^{w(r)}_{k(r)}. Integrating these inequalities with respect to x over J^{w(r)}_{k(r)}, multiplying by σ̂_{w(r)k(r)}(y), then summing the resulting inequalities over all k(r), and finally using equality (2.8), one gets the desired double inequality (3.10).
Integrating the above inequalities with respect to (w(r), y(j)), using Jensen's inequality and condition (b) of Def. 3.2, and applying formulas (2.2) together with (3.2), gives the required estimate, where u_r is defined by (3.7).
The last inequality implies that (3.8) holds. This is so because G_0 ⊆ G is dense and P is a contraction. Thus we have proved that, for each r ≥ 1, u_r indeed plays the role of h from Theorem 2.5 for P; possibly the trivial one, if ∫ u_{w(r)} dp^r = 0. To exclude the trivial possibility we have to assume the existence of a nontrivial function u_r for P, for some r ≥ 1; that is condition (3.H2). Then by Theorem 2.5 we have lim_{j→∞} P^j g = g_0 for all g ∈ G. From this and the inequality ‖g_0 − Pg_0‖ ≤ ‖g_0 − P^{j+1} g‖ + ‖P^j g − g_0‖ for all g ∈ G, it follows that Pg_0 = g_0, i.e. the density g_0 is P-invariant. This finishes the proof of the theorem.

A C^{1+α}-smooth Markov map φ, 0 < α ≤ 1, means a Markov map in the sense of Def. 2.2 such that the partition π of φ consists of domains, and the restriction φ_k of φ to any I_k ∈ π is a C^{1+α}-diffeomorphism.

An application to a family of C^{1+α}-smooth Markov maps
In this section we consider a family {φ_y}_{y∈Y} of C^{1+α}-smooth Markov maps which satisfies the following C^{1+α}-variant of the so-called Rényi Condition (see e.g. [9] or [10]):
(4.Hy) Let {φ_y}_{y∈Y} be a family of C^{1+α}-smooth Markov maps. There exist constants C_{0,y(r)} > 0, y(r) ∈ Y^r, such that for k(r) ∈ K^r, r = 1, 2, ..., and all I_k ∈ π the corresponding distortion bound holds, where σ_{y(r)k(r)} is defined by (2.6) and J^{y(r)}_{k(r)} = φ_{y(r)k(r)}(I^{y(r−1)}_{k(r)}). Furthermore, the constants C_{0,y(r)} > 0 satisfy the following condition: lim sup_{j→∞} ∫ C_{0,y(j)} dp^j < ∞.
Let {φ_y}_{y∈Y} be a given family of C^{1+α}-smooth Markov maps, and let {π_{y(r)} : y(r) ∈ Y^r, r = 1, 2, ...} be the family of partitions whose elements are defined by (2.4). We assume that this family has the following generating property:

lim_{j→∞} ∫ diam(I^{y(j)}_{k(j+1)})^α dp^j = 0.
We are now going to examine the convergence of {P^j g} under conditions (3.H2), (4.Hy[a, b]), and (4.Hy). We show that condition (4.Hy[a, b]) together with condition (4.Hy) implies condition (3.H1); then, under (3.H2), one gets the thesis of Th. 3.3. It turns out that one can take as the dense subset occurring in condition (3.H1) the following:

Definition 4.1. We denote by G_α, 0 < α ≤ 1, the class of all densities g ∈ G satisfying the following conditions:
(a) spt(g) := {x ∈ I : g(x) > 0} is a union of a number of sets I_k ∈ π;
(b) for each I_k ∈ π, g|_{I_k} ∈ C^{1+α}(I_k), and |g(x) − g(y)| ≤ C(g) g(y) |x − y|^α for all x, y ∈ spt(g) ∩ I_k, where C(g) is a constant depending on g.

Example 4.5.
The second case of stochastic perturbation is the following: x_j(x, ω) = ζ_j(ω) φ(x_{j−1}), for j = 1, 2, .... In that case the stochastic perturbation appears in a multiplicative way (the so-called parametric noise). Such a perturbation changes the statistical behaviour of the system essentially, as the following example illustrates. Let φ_y(x) = y tan(x), y ∈ Y = {b, 1}, and p(ζ_j = b) = 1 − a and p(ζ_j = 1) = a for j = 1, 2, ..., where b > 1 and 0 < a < 1.
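A minimal simulation conveys how parametric noise x_j = ζ_j φ(x_{j−1}) perturbs an orbit. To keep the state in [0, 1] the sketch uses the logistic map φ(x) = 4x(1 − x) as a stand-in for the tangent family of the example (a toy assumption, not the paper's map), with ζ_j ∈ {b, 1}, b = 0.9.

```python
import numpy as np

# Parametric (multiplicative) noise: x_j = zeta_j * phi(x_{j-1}), where
# phi(x) = 4x(1-x) maps [0, 1] into itself and zeta_j in {0.9, 1.0}.
# Since zeta_j <= 1, the noisy orbit also stays in [0, 1].
rng = np.random.default_rng(1)
a, b, steps = 0.5, 0.9, 1000

phi = lambda x: 4.0 * x * (1.0 - x)
x_det, x_noisy = 0.3, 0.3
diffs = []
for _ in range(steps):
    zeta = b if rng.random() < 1.0 - a else 1.0
    x_det = phi(x_det)                  # unperturbed orbit
    x_noisy = zeta * phi(x_noisy)       # each step rescaled by the noise
    assert 0.0 <= x_noisy <= 1.0        # multiplicative noise keeps [0, 1]
    diffs.append(abs(x_noisy - x_det))

# The perturbation changes the behaviour essentially: the chaotic orbits
# decorrelate and separate to order one.
print(max(diffs) > 0.1)
```

The check prints True: once a step with ζ_j = b occurs, sensitive dependence drives the perturbed and unperturbed orbits apart, so the long-run statistics of the two systems differ.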