On eigenvalues of a matrix arising in energy-preserving/dissipative continuous-stage Runge-Kutta methods


               <jats:p>In this short note, we define an <jats:italic>s</jats:italic> × <jats:italic>s</jats:italic> matrix <jats:italic>K<jats:sub>s</jats:sub>
                  </jats:italic> constructed from the Hilbert matrix <jats:inline-formula>
                     <jats:alternatives>
                        <jats:inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphic/j_spma-2021-0101_eq_001.png" />
                        <m:math xmlns:m="http://www.w3.org/1998/Math/MathML" display="inline">
                           <m:mrow>
                              <m:msub>
                                 <m:mrow>
                                    <m:mi>H</m:mi>
                                 </m:mrow>
                                 <m:mi>s</m:mi>
                              </m:msub>
                              <m:mo>=</m:mo>
                              <m:msubsup>
                                 <m:mrow>
                                    <m:mrow>
                                       <m:mrow>
                                          <m:mo>(</m:mo>
                                          <m:mrow>
                                             <m:mfrac>
                                                <m:mn>1</m:mn>
                                                <m:mrow>
                                                   <m:mi>i</m:mi>
                                                   <m:mo>+</m:mo>
                                                   <m:mi>j</m:mi>
                                                   <m:mo>-</m:mo>
                                                   <m:mn>1</m:mn>
                                                </m:mrow>
                                             </m:mfrac>
                                          </m:mrow>
                                          <m:mo>)</m:mo>
                                       </m:mrow>
                                    </m:mrow>
                                 </m:mrow>
                                 <m:mrow>
                                    <m:mi>i</m:mi>
                                    <m:mo>,</m:mo>
                                    <m:mi>j</m:mi>
                                    <m:mo>=</m:mo>
                                    <m:mn>1</m:mn>
                                 </m:mrow>
                                 <m:mi>s</m:mi>
                              </m:msubsup>
                           </m:mrow>
                        </m:math>
                        <jats:tex-math>{H_s} = \left( {{1 \over {i + j - 1}}} \right)_{i,j = 1}^s</jats:tex-math>
                     </jats:alternatives>
                  </jats:inline-formula> and prove that it has at least one pair of complex eigenvalues when <jats:italic>s</jats:italic> ≥ 2. <jats:italic>K<jats:sub>s</jats:sub>
                  </jats:italic> is a matrix related to the AVF collocation method, which is an energy-preserving/dissipative numerical method for ordinary differential equations, and our result gives a matrix-theoretical proof that the method does not have large-grain parallelism when its order is larger than or equal to 4.</jats:p>


1 Introduction
Let Hs ∈ R^{s×s} be the Hilbert matrix of order s defined by

Hs = (1/(i + j − 1))_{i,j=1}^s

and Hs^< its shifted version:

Hs^< = (1/(i + j))_{i,j=1}^s.

In this paper, we consider a matrix Ks defined by

Ks = (Hs^<)^{-1} Hs diag(1, 2, . . . , s)

and study whether it has complex eigenvalues or not. This problem is related to the analysis of so-called structure-preserving numerical methods for ordinary differential equations (ODEs) [1][2][3]. More specifically, the matrix Ks arises in the analysis of the AVF collocation method [4], which is a class of energy-preserving/dissipative numerical methods belonging to continuous-stage Runge-Kutta methods. As will be detailed in the next section, if Ks has only real eigenvalues and is diagonalizable for some s, then the AVF collocation method of order 2s has large-grain parallelism, which is a desirable property from the viewpoint of high performance computing. Actually, it is possible to analyze this problem in a different way, by using the relationship between the AVF collocation method and the Gauss-Legendre Runge-Kutta method and exploiting the properties of the latter [5,6]. But a more direct analysis based on matrix theory is desirable, because such an analysis would also be applicable to some generalizations of the AVF collocation method [7]. We therefore deal with this problem in the present paper.

*Corresponding Author: Yusaku Yamamoto: The University of Electro-Communications, Tokyo, Japan
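As a quick numerical illustration (not part of the proof), the following Python sketch builds Hs, its shifted version Hs^<, and Ks, assuming the definition Ks = (Hs^<)^{-1} Hs diag(1, 2, . . . , s) used in this note, and confirms that complex eigenvalues appear for small s ≥ 2:

```python
import numpy as np

def hilbert(s, shift=0):
    # H_s has entries 1/(i+j-1); the shifted version H_s^< has entries 1/(i+j)
    i, j = np.indices((s, s)) + 1
    return 1.0 / (i + j - 1 + shift)

def K(s):
    # K_s = (H_s^<)^{-1} H_s diag(1, 2, ..., s)
    return np.linalg.solve(hilbert(s, shift=1), hilbert(s)) @ np.diag(np.arange(1.0, s + 1))

for s in range(2, 7):
    eigs = np.linalg.eigvals(K(s))
    print(s, np.round(np.sort_complex(eigs), 3))
```

For s = 2 this gives Ks = [[6, 2], [−6, 0]] with eigenvalues 3 ± i√3, so a complex pair is already present at s = 2.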
The rest of this paper is structured as follows. In Section 2, we introduce the AVF collocation method and explain how its characteristics are governed by the Hilbert matrix Hs and the matrix Ks. In Section 3, we analyze the eigenvalues of Ks and show that it has at least one pair of complex eigenvalues for s ≥ 2. Section 4 gives some discussion and conclusion.

2 The matrices Hs and Ks and the properties of the AVF collocation method
Let us consider the following system of ODEs:

dy/dt = S ∇H(y), y(0) = y_0, (4)

where S ∈ R^{N×N} is a constant matrix and H : R^N → R is a differentiable function. It is well known that the system (4) is energy-conserving (dH/dt = 0) when S is skew-symmetric and dissipative (dH/dt ≤ 0) when S is symmetric negative semidefinite [2]. If the system (4) is discretized by general-purpose numerical integrators such as Runge-Kutta methods, however, such properties are usually lost. To resolve this problem, structure-preserving numerical integrators that preserve the energy-conserving/dissipative property even after discretization have been actively studied [1][2][3]. One of the schemes to derive such integrators in a unified manner is continuous-stage Runge-Kutta (CSRK) methods [4,7,8]. In this method, writing f(y) = S ∇H(y), one computes the numerical solution ŷ at time t + h from the solution y at time t by the following formula:

Y_τ = y + h ∫_0^1 A_{τ,ζ} f(Y_ζ) dζ (0 ≤ τ ≤ 1), ŷ = y + h ∫_0^1 B_ζ f(Y_ζ) dζ. (6)

Here, A_{τ,ζ} is a polynomial of degree s in τ and degree s − 1 in ζ defined by

A_{τ,ζ} = ∫_0^τ (1, σ, . . . , σ^{s−1}) M (1, ζ, . . . , ζ^{s−1})^T dσ,

where M is an s × s matrix that defines a specific CSRK method, and B_ζ = A_{1,ζ}. By choosing A_{τ,ζ} in this way and approximating the integral on the right-hand side of (6) by numerical quadrature, the integral equation (6) is reduced to a nonlinear equation, which can be solved by, for example, the simplified Newton method.

The characteristics of a specific CSRK method are determined by the matrix M as follows (see [7] for details).

1. If M is symmetric, the corresponding CSRK method is energy-preserving. If, in addition, M is positive semidefinite, the method is also dissipative when applied to dissipative ODEs.
2. If M satisfies a certain set of algebraic conditions written in terms of the vectors e_k, where e_k is the kth column of the identity matrix of order s (condition (9); see [7] for the precise form), the corresponding CSRK method has order η, i.e., its local error is O(h^{η+1}). Note that this is a sufficient condition.

3. If the eigenvalues of the matrix Es ∈ R^{s×s} defined by

Es = diag(1, 1/2, . . . , 1/s) M Hs^<

are all real and Es is diagonalizable, then the corresponding CSRK method has large-grain parallelism. This means that the system of linear simultaneous equations of order sN to be solved at each time step decomposes into s independent linear systems of order N each.
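The decoupling mechanism behind item 3 can be illustrated with a toy computation: if a matrix E = VΛV^{-1} has real eigenvalues, then a coupled block system (I_{sN} − h E ⊗ J)x = b turns, under the change of variables V^{-1} ⊗ I_N, into s independent systems of order N. The E, J, and b below are hypothetical stand-ins, not the actual CSRK stage equations:

```python
import numpy as np

rng = np.random.default_rng(0)
s, N, h = 3, 4, 0.1

# A diagonalizable E with real eigenvalues (stand-in for Es)
V = rng.standard_normal((s, s))
lam = np.array([0.5, 1.0, 2.0])
E = V @ np.diag(lam) @ np.linalg.inv(V)

J = rng.standard_normal((N, N))   # stand-in for the N x N Jacobian block
b = rng.standard_normal(s * N)

# Coupled system of order sN
x_full = np.linalg.solve(np.eye(s * N) - h * np.kron(E, J), b)

# Decoupled solve: transform by V^{-1} (x) I_N, solve s systems of order N, transform back
b_t = (np.kron(np.linalg.inv(V), np.eye(N)) @ b).reshape(s, N)
x_t = np.stack([np.linalg.solve(np.eye(N) - h * lam[k] * J, b_t[k]) for k in range(s)])
x_dec = np.kron(V, np.eye(N)) @ x_t.reshape(s * N)

print(np.allclose(x_full, x_dec))
```

The s small solves are independent and can run in parallel, which is exactly the large-grain parallelism referred to above; the construction breaks down when E has complex eigenvalues and a real diagonalization is unavailable.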
If we set M = Hs^{-1}, the condition (9) is satisfied for k = 1, 2, . . . , s and the resulting CSRK method has order 2s. This is called the AVF collocation method [4]. This method is energy-preserving since Hs (and therefore Hs^{-1}) is symmetric, and also dissipative since Hs (and therefore Hs^{-1}) is positive definite. So, it is of interest to know whether the matrix

Es = diag(1, 1/2, . . . , 1/s) Hs^{-1} Hs^<

has all real eigenvalues, since then the method also has the desirable property of large-grain parallelism. Note that this Es is exactly the inverse of Ks, so the two matrices have complex eigenvalues for the same values of s. Actually, it is known that the Es matrix corresponding to the AVF collocation method has complex eigenvalues for s ≥ 2, because it is similar to the Es matrix of the Gauss-Legendre Runge-Kutta method and the latter is known to have at most one real eigenvalue [6]. However, this is an indirect proof, and a more direct proof based on matrix theory is desirable, since such a proof could be extended to modifications of the AVF collocation method, for which the matrix M is a slight perturbation of Hs^{-1}. In the next section, we study the existence of complex eigenvalues of Ks.

3 Analysis of the eigenvalues of Ks

3.1 Computation of the entries of Ks

For j = 2, . . . , s, the jth column of Hs coincides with the (j − 1)th column of Hs^<, so the jth column of Ks = (Hs^<)^{-1} Hs diag(1, 2, . . . , s) is simply j e_{j−1}. Hence, only the first column k = (k_1, . . . , k_s)^T of (Hs^<)^{-1} Hs needs to be computed, and we have from Cramer's formula,

k_i = det(Hs^< with its ith column replaced by (1, 1/2, . . . , 1/s)^T) / det(Hs^<). (12)

Note that the numerator of the last expression is, up to sign, the determinant of the matrix obtained by deleting the (s + 1)th row and the (i + 1)th column of H_{s+1}, while the denominator is the determinant of the matrix obtained by deleting the (s + 1)th row and the 1st column of H_{s+1}. Hence, the numerator and the denominator are, up to sign, the (s + 1, i + 1) and (s + 1, 1) cofactors of H_{s+1}, respectively, and can be written as det(H_{s+1}) (H_{s+1}^{-1})_{i+1, s+1} and det(H_{s+1}) (H_{s+1}^{-1})_{1, s+1} up to sign. By inserting these expressions into (12) and using the formula for the inverse of the Hilbert matrix [9]:

(Hn^{-1})_{ij} = (−1)^{i+j} (i + j − 1) C(n + i − 1, n − j) C(n + j − 1, n − i) C(i + j − 2, i − 1)^2,

where C(·, ·) denotes the binomial coefficient, we have

k_i = (−1)^{i+1} (s + i + 1) C(2s + 1, s − i) C(s + i, i)^2 / ((s + 1) C(2s + 1, s)).

Thus, we have succeeded in computing all the entries of (Hs^<)^{-1} Hs, and hence of Ks.
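The closed form for k_i can be sanity-checked against a direct linear solve. The sketch below assumes, as above, that k is the first column of (Hs^<)^{-1} Hs, i.e., the solution of Hs^< k = (1, 1/2, . . . , 1/s)^T:

```python
import math
import numpy as np

def hilb(s, shift=0):
    # H_s has entries 1/(i+j-1); H_s^< has entries 1/(i+j)
    i, j = np.indices((s, s)) + 1
    return 1.0 / (i + j - 1 + shift)

def k_entry(s, i):
    # closed form for the ith entry of the first column of (H_s^<)^{-1} H_s,
    # obtained from Cramer's rule and the inverse-Hilbert formula
    return ((-1) ** (i + 1) * (s + i + 1) * math.comb(2 * s + 1, s - i)
            * math.comb(s + i, i) ** 2) / ((s + 1) * math.comb(2 * s + 1, s))

for s in range(2, 7):
    k_direct = np.linalg.solve(hilb(s, 1), hilb(s)[:, 0])
    k_formula = np.array([k_entry(s, i) for i in range(1, s + 1)])
    print(s, np.allclose(k_direct, k_formula))
```

For example, for s = 3 both routes give k = (12, −30, 20).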

3.2 Existence of complex eigenvalues
Now, let us apply a similarity transformation Cs = D Ks D^{-1} to Ks using the diagonal matrix D = diag(1!, 2!, . . . , s!). Then we obtain a matrix Cs whose jth column is e_{j−1} for j = 2, . . . , s and whose first column is (1! k_1, 2! k_2, . . . , s! k_s)^T, that is, a companion matrix. Since Cs is a companion matrix, its characteristic polynomial is given as

Ps(x) = Σ_{i=0}^s (−1)^i c_i x^{s−i}, where c_i = (−1)^{i+1} i! k_i = (s + i)! / (i! (s − i)!). (18)

Note that while the expression (18) for c_i was derived for i = 1, 2, . . . , s, it is valid also for c_0, because substituting i = 0 into (18) gives c_0 = 1, which is the correct coefficient of x^s in the characteristic polynomial. To study the reality of the roots of Ps(x) = 0, we use the following lemma due to Newton [10].

Lemma 3.1 (Newton). Assume that n ≥ 2 and let P(x) = Σ_{i=0}^n nCi a_i x^i. Then, for all the roots of the nth order algebraic equation P(x) = 0 to be real, the following inequalities must hold:

a_i^2 ≥ a_{i−1} a_{i+1} (i = 1, 2, . . . , n − 1). (20)

In our case, the coefficients a_i in the lemma are given as

a_i = (−1)^{s−i} c_{s−i} / sCi = (−1)^{s−i} (2s − i)! / s!.

Hence,

a_i^2 / (a_{i−1} a_{i+1}) = (2s − i) / (2s − i + 1) < 1 (i = 1, 2, . . . , s − 1),

which shows that the inequality (20) does not hold for any i. Thus, we arrive at the following theorem.

Theorem 3.2. For s ≥ 2, the matrix Ks has at least one pair of complex conjugate eigenvalues.

This translates into the following corollary on the AVF collocation method.

Corollary 3.3. The AVF collocation method of order 2s with s ≥ 2, that is, of order greater than or equal to 4, does not have large-grain parallelism.
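The companion structure and the coefficient formula can be checked numerically. The sketch below assumes Ks = (Hs^<)^{-1} Hs diag(1, . . . , s) and D = diag(1!, . . . , s!) as above, forms Cs = D Ks D^{-1}, and compares its characteristic polynomial with Σ (−1)^i c_i x^{s−i}, c_i = (s + i)!/(i!(s − i)!):

```python
import math
import numpy as np

def hilb(s, shift=0):
    i, j = np.indices((s, s)) + 1
    return 1.0 / (i + j - 1 + shift)

def companion_form(s):
    # Cs = D Ks D^{-1} with D = diag(1!, 2!, ..., s!)
    Ks = np.linalg.solve(hilb(s, 1), hilb(s)) @ np.diag(np.arange(1.0, s + 1))
    d = np.array([math.factorial(i) for i in range(1, s + 1)], dtype=float)
    return (d[:, None] * Ks) / d[None, :]

def char_coeffs(s):
    # coefficients of P_s(x) = sum_i (-1)^i c_i x^(s-i), c_i = (s+i)!/(i!(s-i)!)
    return [(-1) ** i * math.factorial(s + i) // (math.factorial(i) * math.factorial(s - i))
            for i in range(s + 1)]

for s in range(2, 6):
    C = companion_form(s)
    # ones on the superdiagonal, nontrivial entries only in the first column
    assert np.allclose(np.diag(C, 1), 1.0)
    assert np.allclose(np.poly(C), np.array(char_coeffs(s), dtype=float), rtol=1e-6)
    print(s, char_coeffs(s))
```

For s = 2 this reproduces Ps(x) = x^2 − 6x + 12, whose roots 3 ± i√3 are the complex pair guaranteed by the theorem.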

4 Discussion
To prove the existence of a complex root of Ps(x) = 0 using Newton's lemma, it is sufficient to show that (20) fails to hold for at least one i with 1 ≤ i < s. In our case, however, (20) fails to hold for all of 1 ≤ i < s. Thus, it would not be so easy to modify Ks so that (20) holds for all 1 ≤ i < s by introducing only a small number of parameters into M. On the other hand, the ratio a_i^2 / (a_{i−1} a_{i+1}) = (2s − i) / (2s − i + 1) is very close to 1. This suggests that Ks is very close to a matrix with all real eigenvalues¹. Accordingly, it is still an open question whether we can modify Hs so that Ks has all real eigenvalues while retaining the energy-preservation/dissipative properties and the order conditions. Our analysis given in this paper should be a useful guideline in pursuing research in this direction.
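The closeness of these ratios to 1 can be made concrete in exact arithmetic, assuming the coefficient formula a_i = (−1)^{s−i} (2s − i)!/s! derived in Section 3:

```python
import math
from fractions import Fraction

def newton_ratios(s):
    # a_i = (-1)^(s-i) (2s-i)!/s!; Newton's inequality compares a_i^2 with a_{i-1}*a_{i+1}
    a = [Fraction((-1) ** (s - i) * math.factorial(2 * s - i), math.factorial(s))
         for i in range(s + 1)]
    return [a[i] ** 2 / (a[i - 1] * a[i + 1]) for i in range(1, s)]

for s in (2, 5, 20):
    ratios = newton_ratios(s)
    # every ratio equals (2s-i)/(2s-i+1): strictly below 1, but approaching 1 as s grows
    print(s, [str(r) for r in ratios[:3]])
```

For s = 20, for instance, the first ratio is 39/40 = 0.975, so each inequality fails only by a narrow margin, which is what makes the open question above plausible.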