The operator approach to the truncated multidimensional moment problem

We study the truncated multidimensional moment problem with a general type of truncations. The operator approach to the moment problem is presented. A way to construct atomic solutions of the moment problem is indicated.


Introduction
Let us introduce some notations. As usual, we denote by R, C, N, Z, Z_+ the sets of real numbers, complex numbers, positive integers, integers and non-negative integers, respectively. By Z_+^n we mean Z_+ × . . . × Z_+, and R^n = R × . . . × R, where the Cartesian products are taken with n copies. Let k = (k_1, . . . , k_n) ∈ Z_+^n, t = (t_1, . . . , t_n) ∈ R^n. We denote by t^k the monomial t_1^{k_1} t_2^{k_2} · · · t_n^{k_n}, and we let |k| = k_1 + . . . + k_n. We also denote by B(R^n) the set of all Borel subsets of R^n.
Let K be an arbitrary finite subset of Z_+^n, and S = (s_k)_{k∈K} an arbitrary set of real numbers. The truncated multidimensional moment problem consists of finding a (non-negative) measure µ on B(R^n) such that
∫_{R^n} t^k dµ(t) = s_k, ∀k ∈ K. (1)
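For orientation, the defining property (1) can be checked numerically for a candidate measure. Below is a minimal sketch in Python; the atomic measure, its masses, and the rectangular truncation K are all hypothetical illustrative choices, not data from the text.

```python
import numpy as np
from itertools import product

# Hypothetical data: a 2-atomic measure on R^2 with atoms t_1, t_2 and masses m_1, m_2.
atoms = np.array([[0.0, 1.0], [2.0, -1.0]])   # points t in R^2
masses = np.array([0.5, 1.5])                  # non-negative weights

# Truncation: K = {k in Z_+^2 : k_1 <= 2, k_2 <= 2} (a rectangular truncation).
K = list(product(range(3), repeat=2))

# s_k = integral of t^k dmu(t) = sum_i m_i * t_i^k for an atomic measure.
moments = {k: float(np.sum(masses * atoms[:, 0]**k[0] * atoms[:, 1]**k[1]))
           for k in K}

print(moments[(0, 0)])   # total mass mu(R^2) = 2.0
```

Any measure of this atomic form automatically solves the moment problem for the moments it generates; the hard direction, addressed in the paper, is to start from prescribed numbers s_k.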
The multidimensional moment problem (both the full and the truncated versions) turned out to be much more complicated than its one-dimensional prototype [1], [2], [13]. An operator-theoretical interpretation of the (full) multidimensional moment problem was given by Fuglede in [7]. It should be noticed that the operator approach to moment problems was introduced by Naimark in 1940-1943 and then developed by many authors, see historical notes in [28]. Elegant conditions for the solvability of the multidimensional moment problem in the case of the support on semi-algebraic sets were given by Schmüdgen in [17], [18]. Other conditions for the solvability of the multidimensional moment problem, using an extension of the moment sequence, were given by Putinar and Vasilescu, see [16], [21]. Developing the idea of Putinar and Vasilescu, we presented different conditions for the solvability of the two-dimensional moment problem and proposed an algorithm (which essentially consists of solving linear equations) for the construction of the set of solutions [25]. An analytic parametrization for all solutions of the two-dimensional moment problem in a strip was given in [27]. Another approach to multidimensional and complex moment problems (including truncated problems), using extension arguments for *-semigroups, has been developed by Cichoń, Stochel and Szafraniec, see [3] and references therein. Still another approach to the two-dimensional moment problem has been proposed as well.
In what follows, H denotes a Hilbert space with scalar product (·, ·)_H and norm ‖·‖_H. For a set M ⊆ H we denote by M̄ the closure of M in the norm of H. By Lin M we mean the set of all linear combinations of elements from M, and span M denotes the closure of Lin M. By E_H we denote the identity operator in H, i.e. E_H x = x, x ∈ H. In obvious cases we may omit the index H. If H_1 is a subspace of H, then P_{H_1} = P_{H_1}^H denotes the orthogonal projection of H onto H_1.

Necessary conditions for the solvability of the moment problem
Consider the following operator W_j on Z_+^n:
W_j k = k + e_j, k ∈ Z_+^n,
for j = 1, . . . , n, where e_j = (0, . . . , 0, 1, 0, . . . , 0) is the j-th coordinate vector. Thus, the operator W_j increases the j-th coordinate by 1.
Definition 1. A finite subset K ⊂ Z_+^n is said to be admissible if 0 ∈ K and every k ∈ K \ {0} admits a representation
k = W_{a_{|k|}} W_{a_{|k|−1}} · · · W_{a_1} 0, (3)
for some a_j ∈ {1, . . . , n}, and W_{a_r} · · · W_{a_1} 0 ∈ K for r = 1, . . . , |k|. We provide below some important examples of admissible sets.
Suppose that for an admissible finite set K ⊂ Z_+^n the moment problem (1), with 𝒦 = K + K := {k′ + k″ : k′, k″ ∈ K} and some S = (s_k)_{k∈𝒦}, has a solution µ. Let us investigate which properties of the data S this fact yields. The first property is the usual positivity condition. For practical purposes, we will assume below that the elements of K are indexed by a single index, i.e., we assume
K = {k_0, k_1, . . . , k_ρ}, (6)
with ρ + 1 = |K|. Consider an arbitrary polynomial of the following form:
p(t) = ∑_{j=0}^{ρ} α_j t^{k_j}, α_j ∈ C. (7)
Evaluating ∫ |p(t)|² dµ(t) ≥ 0, we get
∑_{j,m=0}^{ρ} α_j \overline{α_m} s_{k_j+k_m} ≥ 0, for arbitrary complex α_0, . . . , α_ρ. (8)
We now suppose that for an admissible finite set K ⊂ Z_+^n the moment problem (1), with 𝒦 = K + K and some S = (s_k)_{k∈𝒦}, is given and condition (8) holds (we do not require that the moment problem be solvable). Let L denote the set of polynomials of the form (7). Observe that L forms a vector space. Consider the following functional on L:
⟨p, q⟩ = ∑_{j,m=0}^{ρ} α_j \overline{β_m} s_{k_j+k_m},
where p ∈ L is as in Equation (7), and q ∈ L has the same form as p, but with β_j (∈ C) instead of α_j. The functional ⟨·, ·⟩ is sesquilinear, ⟨p, p⟩ ≥ 0, and ⟨p, q⟩ = \overline{⟨q, p⟩}. Elements u, v ∈ L are said to be equivalent if ⟨u − v, u − v⟩ = 0. By [p]_L we denote the equivalence class which contains p ∈ L. The equivalence classes form a finite-dimensional Hilbert space H. The Hilbert space H is said to be associated to the moment problem (1).
We now return to the case of the solvable moment problem. Consider the space L²_µ which consists of (the equivalence classes of) complex-valued measurable functions f such that ∫ |f(t)|² dµ < ∞. The equivalence class in L²_µ will be denoted by [·]_{L²_µ}. Denote by T_l the following multiplication operator:
T_l f(t) = t_l f(t), f ∈ D_l,
with the domain D_l := {f ∈ L²_µ : t_l f(t) ∈ L²_µ}.
Observe that ‖[p]_L‖_H = ‖[p]_{L²_µ}‖_{L²_µ} for every p ∈ L, so the map W[p]_L := [p]_{L²_µ} is well defined and isometric. Set Ω_l := {j ∈ {0, . . . , ρ} : k_j + e_l ∈ K}. Since the operator W T_l W^{−1} is well defined, the following implication holds:
(∑_{j∈Ω_l} α_j [t^{k_j}]_L = 0) ⇒ (∑_{j∈Ω_l} α_j [t^{k_j+e_l}]_L = 0).
The latter implication is equivalent to the following one, or, equivalently,
(∑_{j∈Ω_l} α_j s_{k_j+k_m} = 0, ∀m ∈ Ω_l, for some α_j ∈ C) ⇒ (∑_{j∈Ω_l} α_j s_{k_j+e_l+k_m+e_l} = 0, ∀m ∈ Ω_l).
Denote
Γ_l = (s_{k_j+k_m})_{m,j∈Ω_l}, Γ̃_l = (s_{k_j+e_l+k_m+e_l})_{m,j∈Ω_l}, l = 1, . . . , n, (16)
where the indices from Ω_l are taken in the increasing order. We obtain the second necessary condition of the solvability:
Ker Γ_l ⊆ Ker Γ̃_l, l = 1, . . . , n. (17)
We summarize our results in the following theorem.
Theorem 1. Let the moment problem (1) with 𝒦 = K + K, for an admissible finite set K and some S = (s_k)_{k∈𝒦}, be given. Then conditions (8), (17) hold.
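Conditions (8) and (17) of Theorem 1 lend themselves to a direct numerical check. The following sketch, for n = 2, uses hypothetical moments generated by the 1-atomic measure 2δ_{(1,2)} (a genuine measure, so both necessary conditions must hold); the set K and all names are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: moments of the 1-atomic measure mu = 2 * delta_{(1,2)},
# so the necessary conditions of Theorem 1 must hold.
def s(k):                      # s_k = 2 * 1^{k_1} * 2^{k_2}
    return 2.0 * (1.0 ** k[0]) * (2.0 ** k[1])

K = [(0, 0), (1, 0), (0, 1), (1, 1)]          # an admissible (rectangular) set
add = lambda a, b: (a[0] + b[0], a[1] + b[1])

# Condition (8): Gamma = (s_{k_j + k_m}) is positive semidefinite.
Gamma = np.array([[s(add(kj, km)) for kj in K] for km in K])
assert np.linalg.eigvalsh(Gamma).min() > -1e-10

# Condition (17): Ker Gamma_l is contained in Ker tilde-Gamma_l, l = 1, 2.
for e in [(1, 0), (0, 1)]:
    Omega = [j for j, kj in enumerate(K) if add(kj, e) in K]
    G  = np.array([[s(add(K[j], K[m])) for j in Omega] for m in Omega])
    Gt = np.array([[s(add(add(K[j], e), add(K[m], e))) for j in Omega]
                   for m in Omega])
    w, V = np.linalg.eigh(G)
    kernel = V[:, np.abs(w) < 1e-10]           # an orthonormal basis of Ker G
    assert np.allclose(Gt @ kernel, 0.0, atol=1e-8)

print("conditions (8) and (17) hold")
```

Because the moments come from a rank-one (1-atomic) measure, the matrices G here are singular, so the kernel-inclusion check is non-trivial.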
The operator approach to the moment problem. The dimensional stability.
Suppose that for an admissible finite set K ⊂ Z_+^n the moment problem (1), with 𝒦 = K + K and some S = (s_k)_{k∈𝒦}, is given. Fix an ordering of the elements of K as in Equation (6). Assume that conditions (8), (17) hold. We may construct the associated Hilbert space H, as in the previous section. For l = 1, . . . , n we consider the following operators:
M_l ∑_{j∈Ω_l} α_j [t^{k_j}]_L = ∑_{j∈Ω_l} α_j [t^{k_j+e_l}]_L, D(M_l) = Lin{[t^{k_j}]_L}_{j∈Ω_l}. (18)
By condition (17) the operator M_l is well defined. Moreover, it is linear and symmetric. In particular, we have
M_l [t^{k_j}]_L = [t^{k_j+e_l}]_L, j ∈ Ω_l. (19)
Operators M_l are said to be associated to the moment problem (1).

Proposition 1.
Let the moment problem (1) with 𝒦 = K + K, for an admissible finite set K and some S = (s_k)_{k∈𝒦}, be given, and let conditions (8), (17) hold. Suppose that there exist commuting self-adjoint operators M̃_j ⊇ M_j (j = 1, . . . , n) in a finite-dimensional Hilbert space H̃ ⊇ H. Then the moment problem (1) has a solution.
Proof. Assume the existence of commuting self-adjoint operators M̃_j ⊇ M_j (j = 1, . . . , n) in a finite-dimensional Hilbert space H̃ ⊇ H. Observe that in this case the operators M̃_j are bounded and defined on the whole space H̃. Choose an arbitrary k = (k_1, . . . , k_n) ∈ K \ {0}. We shall use the notations from Definition 1.
Using induction one can verify that
[t^k]_L = M_{a_{|k|}} · · · M_{a_1} [1]_L. (21)
In particular, we obtain that [t^k]_L = M̃_{a_{|k|}} · · · M̃_{a_1} [1]_L. Since the operators M̃_j commute, we may rearrange the product in (21). Clearly, the operator W_i appears k_i times in (3). Thus, we get
[t^k]_L = M̃_1^{k_1} · · · M̃_n^{k_n} [1]_L.
We can now construct a solution to the moment problem. For an arbitrary k = (k_1, . . . , k_n) ∈ 𝒦 = K + K, k = k′ + k″, k′ = (k′_1, . . . , k′_n), k″ = (k″_1, . . . , k″_n) ∈ K, we may write
s_k = ⟨[t^{k′}]_L, [t^{k″}]_L⟩_H = ⟨M̃_1^{k_1} · · · M̃_n^{k_n} [1]_L, [1]_L⟩_{H̃} = ∫_{R^n} t^k d⟨E(t)[1]_L, [1]_L⟩, (24)
where E(δ) is the spectral measure of the commuting tuple M̃_1, . . . , M̃_n. Consequently, µ(δ) := ⟨E(δ)[1]_L, [1]_L⟩, δ ∈ B(R^n), is a solution of the moment problem. □

We now present some explicit numerical conditions which ensure that the associated operator M_l is self-adjoint. Denote
g_j := [t^{k_j}]_L, j = 0, 1, . . . , ρ. (25)
Observe that
⟨g_j, g_m⟩_H = s_{k_j+k_m}, j, m = 0, 1, . . . , ρ. (26)

Theorem 2. Let the moment problem (1) with 𝒦 = K + K, for an admissible finite set K and some S = (s_k)_{k∈𝒦}, be given and conditions (8), (17) hold. Fix an arbitrary l ∈ {1, . . . , n}. For every j ∈ {0, . . . , ρ} \ Ω_l, denote by α_t(j) (t ∈ Ω_l) an arbitrary complex solution of the following linear algebraic system:
∑_{t∈Ω_l} α_t(j) s_{k_t+k_m} = s_{k_j+k_m}, m ∈ Ω_l.
The associated operator M_l is self-adjoint if and only if for every j ∈ {0, . . . , ρ} \ Ω_l the following relation holds:
∑_{t∈Ω_l} α_t(j) s_{k_t+k_j} = s_{k_j+k_j}. (28)

Proof. The operator M_l is self-adjoint if and only if
D(M_l) = H. (29)
In fact, in the latter case the operator M_l is symmetric and defined on the whole finite-dimensional space H. We shall give a simple but general argument; it will also be used later. Let h be an arbitrary vector from H, and G := Lin{g_k}_{k∈Ω̃}, where Ω̃ is an arbitrary subset of {0, . . . , ρ}. We denote by y the orthogonal projection of h onto G.
Since y belongs to G, it has the following form: y = ∑_{t∈Ω̃} α_t g_t, with some α_t ∈ C, and ⟨y, g_m⟩ = ⟨h, g_m⟩ for all m ∈ Ω̃, i.e.
∑_{t∈Ω̃} α_t s_{k_t+k_m} = ⟨h, g_m⟩, m ∈ Ω̃. (30)
Conversely, let α_t ∈ C (t ∈ Ω̃) be an arbitrary solution of the linear system (30). Set ŷ := ∑_{t∈Ω̃} α_t g_t. By (30) we conclude that ⟨ŷ, g_m⟩ = ⟨h, g_m⟩, m ∈ Ω̃. Therefore h − ŷ ⊥ G, and ŷ = y. Applying the above argument to the case h = g_j (j ∈ {0, . . . , ρ} \ Ω_l) and Ω̃ = Ω_l, we conclude that y_j := ∑_{t∈Ω_l} α_t(j) g_t is the projection of g_j onto D(M_l). Condition (29) is equivalent to the following condition: ‖y_j‖ = ‖g_j‖, j ∈ {0, . . . , ρ} \ Ω_l. The latter condition can be rewritten in the form (28). □

Suppose that for the associated operator M_l (l ∈ {1, . . . , n}) the conditions of Theorem 2 hold. From the proof of Theorem 2 it is clear that
g_j = ∑_{t∈Ω_l} α_t(j) g_t, j ∈ {0, . . . , ρ} \ Ω_l.
Therefore
M_l g_j = ∑_{t∈Ω_l} α_t(j) M_l g_t, j ∈ {0, . . . , ρ} \ Ω_l. (33)
Thus, by relations (19), (33) the operator M_l is explicitly defined on each vector g_j, j = 0, . . . , ρ. We shall rewrite this in the following form:
M_l g_j = ∑_{k=0}^{ρ} α_k(j) g_k, j = 0, . . . , ρ, (34)
with complex coefficients α_k(j) (obtained by gathering the coefficients of each g_k in the right-hand sides of relations (19), (33)). We now assume additionally that for the associated operator M_r (r ∈ {1, . . . , n}) the conditions of Theorem 2 hold as well. Then
M_r g_j = ∑_{k=0}^{ρ} β_k(j) g_k, j = 0, . . . , ρ, (35)
with some complex β_k(j). By (34) and (35) we come to the following relation:
M_r M_l g_j = ∑_{k=0}^{ρ} γ_k(j) g_k, j = 0, . . . , ρ,
with some complex γ_k(j).
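The criterion of Theorem 2 is entirely computational: solve the linear system for the projection coefficients and check relation (28), with every inner product supplied by the moments through (26). A sketch for a hypothetical data set (n = 2, K the unit square, moments of the measure 2δ_{(1,2)}, so the criterion should succeed):

```python
import numpy as np

# All computations use only property (26): <g_j, g_m> = s_{k_j + k_m}.
# Hypothetical data: n = 2, K = {0,1}^2, moments of the measure 2*delta_{(1,2)}.
K = [(0, 0), (1, 0), (0, 1), (1, 1)]
s = lambda k: 2.0 * (1.0 ** k[0]) * (2.0 ** k[1])
gram = lambda j, m: s((K[j][0] + K[m][0], K[j][1] + K[m][1]))

e1 = (1, 0)
Omega = [j for j, k in enumerate(K) if (k[0] + e1[0], k[1] + e1[1]) in K]
outside = [j for j in range(len(K)) if j not in Omega]

ok = True
for j in outside:
    # the linear system of Theorem 2: sum_t alpha_t(j) s_{k_t+k_m} = s_{k_j+k_m}
    A = np.array([[gram(t, m) for t in Omega] for m in Omega])
    b = np.array([gram(j, m) for m in Omega])
    alpha = np.linalg.lstsq(A, b, rcond=None)[0]   # any solution will do
    # relation (28): the projection y_j of g_j on D(M_1) has ||y_j|| = ||g_j||
    ok = ok and abs(alpha @ b - gram(j, j)) < 1e-9

print("M_1 self-adjoint:", ok)
```

Since the theorem allows an arbitrary solution of the (possibly singular) system, the minimum-norm least-squares solution returned by `lstsq` is a legitimate choice.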

Theorem 3.
Let the moment problem (1) with 𝒦 = K + K, for an admissible finite set K and some S = (s_k)_{k∈𝒦}, be given and conditions (8), (17) hold.

Proof. It follows from the preceding arguments. □
Definition 3. Suppose that for an admissible finite set K ⊂ Z_+^n the moment problem (1), with 𝒦 = K + K and some S = (s_k)_{k∈𝒦}, is given and conditions (8), (17) hold. Set Ω_0 := ∩_{l=1}^{n} Ω_l and H_0 := Lin{g_j}_{j∈Ω_0}. The set of moments S is said to be dimensionally stable if H_0 = H.

The dimensional stability can be verified explicitly by the given moments.

Theorem 5. Let the moment problem (1) with 𝒦 = K + K, for an admissible finite set K and some S = (s_k)_{k∈𝒦}, be given and conditions (8), (17) hold. For every j ∈ {0, . . . , ρ} \ Ω_0, denote by α_t(j) (t ∈ Ω_0) an arbitrary complex solution of the following linear algebraic system:
∑_{t∈Ω_0} α_t(j) s_{k_t+k_m} = s_{k_j+k_m}, m ∈ Ω_0.
The set of moments S is dimensionally stable if and only if the following relation holds:
∑_{t∈Ω_0} α_t(j) s_{k_t+k_j} = s_{k_j+k_j}, j ∈ {0, . . . , ρ} \ Ω_0. (41)

Proof. Observe that the set of moments S is dimensionally stable if and only if
g_j ∈ H_0, j ∈ {0, . . . , ρ} \ Ω_0. (42)
Denote by y_j the projection of the vector g_j (j ∈ {0, . . . , ρ} \ Ω_0) onto H_0. By the general argument presented in the proof of Theorem 2, we conclude that y_j = ∑_{t∈Ω_0} α_t(j) g_t.
It remains to notice that relation (42) is equivalent to the following relation: ‖y_j‖ = ‖g_j‖, j ∈ {0, . . . , ρ} \ Ω_0, which can be written as in (41). □
Suppose that for the moment problem, as in Definition 3, the set S is dimensionally stable. Then the operators M_l are self-adjoint and defined on the whole H. Observe that for l, r ∈ {1, . . . , n}, l ≠ r, we have
M_l M_r g_j = M_l [t^{k_j+e_r}]_L. (43)
In general, it is not clear if the element k_j + e_r =: k_s, with s ∈ {0, . . . , ρ}, has the property s ∈ Ω_l. Thus, we cannot apply relation (19) to get [t^{k_j+e_r+e_l}]_L. However, the following theorem holds.

Theorem 6.
Let the moment problem (1) with 𝒦 = K + K, for a set K as in Relation (5) and some S = (s_k)_{k∈𝒦}, be given. Suppose that conditions (8), (17) hold and that S is dimensionally stable. Then S is completely self-adjoint and the moment problem (1) has a solution.
Proof. In fact, for the type of truncations as in (5), when applying the operator M_r in (43) we do not leave the domain of the operator M_l. Therefore the compositions M_l M_r g_j are well defined by (19), and we have the completely self-adjoint case. It remains to apply Theorem 4. □
Thus, in the case of rectangular truncations the dimensional stability (DS) implies the complete self-adjointness (CS):
(DS) ⇒ (CS). (44)
It is not clear if implication (44) holds for arbitrary admissible truncations. The validity of the converse implication in (44) is also of interest. Suppose that for an admissible finite set K ⊂ Z_+^n the moment problem (1), with 𝒦 = K + K and some S = (s_k)_{k∈𝒦}, is given and conditions (8), (17) hold. Define the associated Hilbert space H and its subspace H_0. To verify the dimensional stability, one can use Theorem 5. On the other hand, one can find the projections of the elements g_j (j ∈ {0, . . . , ρ} \ Ω_0) on the subspace H_0 by using an orthonormal basis in H_0.
In the example considered here, the matrix Γ = (s_{k_j+k_m})_{j,m=0}^{ρ} is written out from the prescribed moments. The non-negativity of Γ is verified directly, by checking that the determinants of all submatrices standing on the intersections of rows and columns with the same indices are non-negative. The matrices Γ_1, Γ̃_1, Γ_2, Γ̃_2 are written out in the same way. The linear algebraic equation Γ_1 x = 0 is solved explicitly: some of the coordinates of x are arbitrary complex numbers, while the remaining ones are expressed through them linearly. It is easy to verify that any such solution satisfies Γ̃_1 x = 0.
On the other hand, the linear algebraic equation Γ_2 x = 0 is solved in the same manner: some of the coordinates of x are arbitrary complex numbers, and the remaining ones are expressed through them linearly. Again, one can verify that any such solution satisfies Γ̃_2 x = 0.
Let us apply the Gram-Schmidt orthogonalization process, removing linearly dependent elements, to the sequence g_0, g_1, . . . , g_ρ. Notice that all norms and scalar products are calculated by the moments, using property (26). We obtain an orthonormal basis F in H; moreover, it turns out that the removed elements are linear combinations of the preceding ones. It remains to verify that the projections of the remaining elements g_j on H_0 coincide with the corresponding elements, which is done in a similar way for each of them. Consequently, the set S = (s_k)_{k∈𝒦} is dimensionally stable. The operators M_1 and M_2 act on the basis vectors according to relations (19) and (33). The matrices ℳ_1, ℳ_2 of the operators M_1, M_2 in the basis F are then calculated; each matrix ℳ_l has two eigenvalues with the corresponding eigenvectors. Observe that the spectral measure E(δ) in relation (24) can have jumps only at points (x, y) with x an eigenvalue of ℳ_1 and y an eigenvalue of ℳ_2. The support of the measure is contained in this set of four points. Thus, the measure µ has at most 4 atoms.
Observe that the masses of the atoms are calculated from the eigenvectors by (24). By (51) we conclude that the solution µ is 2-atomic, having jumps at two of the four candidate points.
An algorithm for the truncated two-dimensional moment problem.
In this section we shall study the case n = 2 of the moment problem (1). We shall give two algorithms: Algorithm 1 and Algorithm 2. These algorithms always give the desired output for any correct input data. Let us briefly describe them. The task of Algorithm 1 is to obtain the matrices of all self-adjoint extensions of the associated operators M_l in the corresponding Hilbert space H. We emphasize that the commutativity of the extensions is not assumed. Self-adjoint extensions of any symmetric operator A with equal deficiency indices always exist. In the case of a densely defined operator this follows by von Neumann's formulas; in the general case it follows by generalized von Neumann's formulas (see, e.g., [26, Theorem 3.13]). The input data for Algorithm 2 include the matrices of two commuting self-adjoint extensions M̂_l ⊇ M_l in H. The output is a solution of the moment problem (1). Thus, the moment problem reduces to providing a link between the algorithms: one should extract in some way commuting extensions among those given by Algorithm 1. Sufficient conditions for such a successful extraction will be given in Theorem 7. If s_0 = 0, then the moment problem (1) cannot have any solution different from µ = 0. In this case, if all the moments are zero then µ = 0 is a solution; otherwise there are no solutions. Thus, we can exclude the case s_0 = 0 from our further considerations.
Algorithm 1 (The construction of all self-adjoint extensions of M_l in H). Input: an admissible finite set K ⊂ Z_+^2, 𝒦 := K + K, and a set of prescribed moments S = (s_k)_{k∈𝒦}, with s_0 ≠ 0, which satisfies the necessary conditions (8) and (17). Here we fix an ordering of the elements of K as in Equation (6), with k_0 = 0.
Step 1. Consider the associated Hilbert space H, which is defined as in the paragraph following formula (8). We shall use the brief notation (25). Although this space consists of abstract elements (the equivalence classes), all numerical calculations will be performed by the basic property (26). For l = 1, 2, we consider the associated operators M_l (see (18)).
Step 2. (The construction of an orthonormal basis in H).
Apply the Gram-Schmidt orthogonalization procedure to the sequence g_0, g_1, . . . , g_ρ, removing the linearly dependent elements if they appear. We get an orthonormal basis F = {f_j} in H. By the construction, each element f_j is a linear combination of the g_k, with explicitly calculated coefficients. Notice that f_0 = s_0^{−1/2} g_0 (recall that s_0 ≠ 0).
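Step 2 can be carried out without ever leaving coordinates: a vector ∑_k v_k g_k is stored as its coefficient vector v, and every inner product is evaluated through the Gram matrix of property (26). A sketch with a hypothetical rank-one Gram matrix (the moments of a 1-atomic measure, so only one independent direction survives):

```python
import numpy as np

# Gram-Schmidt in coordinates: a vector v represents sum_k v[k] g_k, and
# <u-rep, v-rep> = u^T Gamma v by property (26).
# Hypothetical Gram matrix of g_0, ..., g_3 (rank one by construction).
Gamma = np.array([[2., 2., 4., 4.],
                  [2., 2., 4., 4.],
                  [4., 4., 8., 8.],
                  [4., 4., 8., 8.]])

basis = []                                     # coefficient vectors of the f_j
for j in range(Gamma.shape[0]):
    v = np.zeros(Gamma.shape[0]); v[j] = 1.0   # start from g_j
    for f in basis:                            # subtract projections on earlier f's
        v -= (f @ Gamma @ v) * f
    norm2 = v @ Gamma @ v
    if norm2 > 1e-12:                          # drop linearly dependent elements
        basis.append(v / np.sqrt(norm2))

print(len(basis))   # dimension of H
```

With the chosen Gamma the loop keeps only f_0 = g_0/√2: every later g_j has a zero-norm residual and is removed, exactly as prescribed in Step 2.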
Step 3. (The parametrization of all linear extensions of M l ).
Observe that M_l is defined on the elements g_j, j ∈ Ω_l (l = 1, 2). At first, we define linear operators M̂_l on these elements in the same way. For l = 1, 2, one should repeat the following procedure.
Choose an arbitrary element g_k, k ∉ Ω_l. Calculate the norm of its projection on D(M̂_l). If g_k ∈ D(M̂_l), then we skip this element. Otherwise, we set
M̂_l g_k := ∑_j (α_{l;k,j} + i β_{l;k,j}) f_j, α_{l;k,j}, β_{l;k,j} ∈ R,
and extend the domain of M̂_l using linearity. Then we take another element g_k, k ∉ Ω_l, and proceed in a similar way. We continue this procedure until M̂_l is defined on the whole H. This completes the procedure for M̂_l. Notice that the case D(M_l) = H was not excluded in the above procedure; in the latter case the corresponding parameters α_{l;k,j}, β_{l;k,j} are absent.
Step 4. (The calculation of the matrices of M̂_l). Observe that each f_j is a linear combination of the g_k (by the Gram-Schmidt orthogonalization):
f_j = ∑_{k=0}^{ρ} c_{j;k} g_k,
and vice versa, each g_k is a linear combination of the f_j. Then M̂_l f_j = ∑_{k=0}^{ρ} c_{j;k} M̂_l g_k.
By (52), (54) and (56) we see that M̂_l f_j is a linear combination of the f_k, with coefficients which may depend linearly on the parameters α_{l;k,j}, β_{l;k,j}.
In the basis F, we calculate the matrices ℳ_1, ℳ_2 of M̂_1 and M̂_2, respectively. The entries of ℳ_1, ℳ_2 may depend linearly on the real parameters α_{l;k,j}, β_{l;k,j}.
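To make the parameter-dependent matrices and the self-adjointness requirement of the next step concrete: in a toy two-dimensional H with orthonormal basis f_0, f_1, suppose M is prescribed on f_0 by M f_0 = f_1, while its value on f_1 carries four free real parameters. This toy operator and its parameters are illustrative assumptions, not data from the text.

```python
import numpy as np

# Toy situation (hypothetical, dim H = 2, orthonormal basis f_0, f_1):
# M f_0 = f_1 is prescribed; the extension on f_1 carries free real
# parameters a, b, c, d:  M-hat f_1 = (a + i b) f_0 + (c + i d) f_1.
def extension_matrix(a, b, c, d):
    return np.array([[0.0, a + 1j * b],       # columns = images of f_0, f_1
                     [1.0, c + 1j * d]])

# Self-adjointness means the matrix is Hermitian. Equating the entries of
# M-hat and its conjugate transpose forces a = 1, b = 0, d = 0, with c free.
for c in (-1.0, 0.0, 2.5):
    M = extension_matrix(1.0, 0.0, c, 0.0)
    assert np.allclose(M, M.conj().T)          # a self-adjoint extension

print("one-parameter family of self-adjoint extensions")
```

The Hermiticity constraints are linear in the parameters, which is exactly why Step 5 below reduces to linear algebraic systems.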
Step 5. (The extraction of self-adjoint extensions of M_l). The following condition:
ℳ_l = ℳ_l^*, l = 1, 2, (57)
ensures the self-adjointness of M̂_1 and M̂_2. Equating the corresponding entries of the matrices in (57), we obtain linear algebraic systems with complex coefficients for the unknown real parameters α_{l;k,j}, β_{l;k,j}. Taking the real and the imaginary parts of these equations, we get linear algebraic systems with real coefficients and real unknowns α_{l;k,j}, β_{l;k,j}. We now present some sufficient conditions which allow one to obtain a solution of the moment problem.

Theorem 7.
Let the moment problem (1) with n = 2, 𝒦 = K + K, for an admissible finite set K and some S = (s_k)_{k∈𝒦}, be given and conditions (8), (17) hold. Suppose that condition (28) of Theorem 2 holds for l = 1 or l = 2, and that the commutativity relation
ℳ_1 ℳ_2 = ℳ_2 ℳ_1 (58)
admits a solution in the remaining real parameters (see Remark 1). Then the moment problem (1) is solvable.

Remark 1.
Observe that condition (28) of Theorem 2 ensures that one of the associated operators is self-adjoint. Therefore one of the matrices ℳ_1, ℳ_2 has no parameters. Equating the corresponding entries of the matrices in (58), we obtain a linear algebraic system with complex coefficients and real unknowns. Taking the real and the imaginary parts of these equations, we get a linear algebraic system with real coefficients and real unknowns, which can be solved by the Gauss elimination method.
Proof. By Remark 1 we conclude that the operators M_1 and M_2 have commuting self-adjoint extensions in H. Applying Proposition 1 we obtain that the moment problem has a solution. □
The following algorithm provides a solution to the moment problem, if we have commuting matrices ℳ_1, ℳ_2 selected from those obtained by Algorithm 1. In particular, it can be applied if the conditions of Theorem 7 hold.
Algorithm 2 (The construction of a solution of the moment problem). Input: commuting matrices ℳ_1, ℳ_2, selected from those obtained by Algorithm 1.
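In essence, Algorithm 2 evaluates formula (24): diagonalize the commuting matrices jointly and read off atoms and masses from the spectral measure applied to the coordinate vector of the class [1]. A sketch with hypothetical commuting 2×2 matrices (assumed input, not taken from the examples below):

```python
import numpy as np

# Sketch of Algorithm 2: given two commuting self-adjoint matrices M1, M2 in
# an orthonormal basis of H and the coordinate vector h of the class [1],
# produce an atomic solution mu = sum_i m_i delta_{(x_i, y_i)} as in (24).
# The input below is hypothetical.
M1 = np.array([[1.0, 1.0], [1.0, 1.0]])
M2 = np.array([[3.0, 1.0], [1.0, 3.0]])
assert np.allclose(M1 @ M2, M2 @ M1)          # commutativity is essential

h = np.array([np.sqrt(2.0), 0.0])             # here [1] = sqrt(s_0) f_0, s_0 = 2

# M1 has simple spectrum, so its eigenvectors diagonalize M2 as well
# (with degenerate spectra one must refine inside the eigenspaces).
lam1, V = np.linalg.eigh(M1)
lam2 = np.diag(V.T @ M2 @ V)

atoms = list(zip(lam1.round(12), lam2.round(12)))   # points (x_i, y_i) in R^2
masses = (V.T @ h) ** 2                             # m_i = |<v_i, h>|^2

print("atoms:", atoms)
print("total mass:", float(masses.sum()))     # recovers the moment s_0
```

The number of atoms is at most dim H, and the total mass of the constructed measure automatically reproduces s_0.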
Output: a solution µ of the moment problem.
Let us illustrate the above algorithms by the following examples. Order the elements of K as k_0, k_1, k_2, k_3, with k_0 = (0, 0).
The matrix Γ = (s_{k_j+k_m})_{m,j=0}^{3} is written out from the prescribed moments. Its non-negativity holds; it is verified by checking that the determinants of all submatrices standing on the intersections of rows and columns with the same indices are non-negative. The matrices Γ_1, Γ̃_1, Γ_2, Γ̃_2 are written out in the same way. Therefore conditions (17) hold. Let us apply Algorithm 1.
Step 1. Consider the associated Hilbert space H. Consider the multiplication operators M_l as in (18). Notice that M_l g_0 ∈ {g_1, g_2, g_3} (l = 1, 2), and each of D(M_1) = Lin{g_j}_{j∈Ω_1}, D(M_2) = Lin{g_j}_{j∈Ω_2} is spanned by two of the g_j.
Step 2. Let us apply the Gram-Schmidt orthogonalization process, removing linearly dependent elements, to the sequence g_0, g_1, g_2, g_3. We shall use the property (26). We obtain two orthonormal elements f_0, f_1, while the two remaining elements of the sequence coincide with elements obtained earlier.
Therefore F := {f_0, f_1} is an orthonormal basis in H, and dim H = 2.
Since the element g_k to be added coincides with an element already lying in D(M̂_l), the procedure of the extension is finished.