# An efficient finite element method based on dimension reduction scheme for a fourth-order Steklov eigenvalue problem

• Hui Zhang, Zixin Liu and Jun Zhang
From the journal Open Mathematics

## Abstract

In this article, an effective finite element method based on dimension reduction scheme is proposed for a fourth-order Steklov eigenvalue problem in a circular domain. By using the Fourier basis function expansion and variable separation technique, the original problem is transformed into a series of radial one-dimensional eigenvalue problems with boundary eigenvalue. Then we introduce essential polar conditions and establish the discrete variational form for each radial one-dimensional eigenvalue problem. Based on the minimax principle and the approximation property of the interpolation operator, we prove the error estimates of approximation eigenvalues. Finally, some numerical experiments are provided, and the numerical results show the efficiency of the proposed algorithm.

MSC 2010: 34L15; 65M60; 97N20

## 1 Introduction

Fourth-order Steklov eigenvalue problems with the eigenvalue parameter in the boundary conditions are widely used in mathematics and physics, such as surface wave research, the stability analysis of mechanical oscillators in a viscous fluid, the study of the vibration modes of a structure in contact with an incompressible fluid, and so on [1,2,3,4,5]. The first eigenvalue $\lambda_1$ also plays an important role in the positivity-preserving properties of the biharmonic operator $\Delta^2$ under the boundary conditions $\phi|_{\partial\Omega} = (\Delta\phi - \lambda\partial_\nu\phi)|_{\partial\Omega} = 0$ in [6,7].

There are many existing results about fourth-order Steklov eigenvalue problems, but they mainly focus on qualitative analysis. Kuttler [8] proved that the first eigenvalue is simple and that the corresponding eigenfunction does not change sign. Ferrero et al. [7] and Bucur et al. [9] studied the spectrum on a bounded domain, and an explicit representation of the spectrum is given when the domain is a ball. Recently, the existence of an optimal convex shape among domains of a given measure was proved in [10], and a Weyl-type asymptotic formula for the counting function of the biharmonic Steklov eigenvalues was established in [11]. As for numerical methods for fourth-order Steklov eigenvalue problems, a conforming finite element method was first proposed in [12], and some spectral methods have also been developed [13].

As is well known, if a conforming finite element method is directly used to solve a fourth-order problem, continuity of the first derivative is required across element boundaries, which not only makes the construction of basis functions difficult but also costs a great deal of computing time and memory, especially for some special regions such as circular and spherical domains. How, then, can a fourth-order Steklov eigenvalue problem in a circular domain be solved efficiently? To the best of our knowledge, there are few reports on efficient numerical methods for this problem. Thus, the aim of this article is to propose an effective finite element method based on a dimension reduction scheme for a fourth-order Steklov eigenvalue problem in a circular domain. By using the Fourier basis function expansion and variable separation technique, the original problem is transformed into a series of radial one-dimensional eigenvalue problems with boundary eigenvalue. Then we introduce essential polar conditions and establish the discrete variational form for each radial one-dimensional eigenvalue problem. Based on the minimax principle and the approximation property of the interpolation operator, we prove the error estimates of approximation eigenvalues. Finally, some numerical experiments are provided, and the numerical results show the efficiency of the proposed algorithm.

This article is organized as follows. In Section 2, a reduced scheme based on polar coordinate transformation is presented. In Section 3, the weighted space and discrete variational form are derived. In Section 4, the error estimation of approximation solutions is proved. In Section 5, we present the process of effective implementation of the algorithm. We present some numerical experiments in Section 6 to illustrate the accuracy and efficiency of our proposed algorithm. Finally, we give in Section 7 some concluding remarks.

## 2 Reduced scheme based on polar coordinate transformation

The fourth-order Steklov eigenvalue problem reads:

(2.1) $\Delta^2\phi = 0$, in $\Omega$,

(2.2) $\phi = 0$, on $\partial\Omega$,

(2.3) $\Delta\phi = \lambda\dfrac{\partial\phi}{\partial\nu}$, on $\partial\Omega$,

where $\Omega = \{(x,y)\in\mathbb{R}^2 : 0 < x^2 + y^2 < R^2\}$ and $\nu$ is the unit outward normal to the boundary $\partial\Omega$. Let $x = r\cos\theta$, $y = r\sin\theta$, $\psi(r,\theta) = \phi(x,y)$. Then we derive that

(2.4) $\Delta\phi(x,y) = L\psi(r,\theta) = \dfrac{1}{r}\dfrac{\partial}{\partial r}\left(r\dfrac{\partial\psi(r,\theta)}{\partial r}\right) + \dfrac{1}{r^2}\dfrac{\partial^2\psi(r,\theta)}{\partial\theta^2}$.
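The polar form (2.4) can be checked symbolically on a concrete test function; a minimal sympy sketch (the test function $\phi = x^4 + y^3$ is an arbitrary choice):

```python
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)
phi = x**4 + y**3                      # arbitrary smooth test function
lap_cart = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
psi = phi.subs({x: r*sp.cos(th), y: r*sp.sin(th)})
# polar Laplacian L psi from (2.4)
lap_polar = sp.diff(r*sp.diff(psi, r), r)/r + sp.diff(psi, th, 2)/r**2
diff = sp.simplify(lap_polar - lap_cart.subs({x: r*sp.cos(th), y: r*sp.sin(th)}))
assert diff == 0                       # both expressions of the Laplacian agree
```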

Then the equivalent form of (2.1)–(2.3) in polar coordinates is as follows:

(2.5) $L^2\psi(r,\theta) = 0$, in $D = (0,R)\times[0,2\pi)$,

(2.6) $\psi(R,\theta) = 0$, $\theta\in[0,2\pi)$,

(2.7) $L\psi(R,\theta) = \lambda\dfrac{\partial\psi}{\partial r}(R,\theta)$, $\theta\in[0,2\pi)$.

Since $\psi(r,\theta)$ is $2\pi$-periodic in $\theta$, we have

(2.8) $\psi(r,\theta) = \sum\limits_{|m|=0}^{\infty}\psi_m(r)e^{im\theta}$.

Substituting (2.8) into (2.4), we derive that

(2.9) $L\psi(r,\theta) = \sum\limits_{|m|=0}^{\infty}\dfrac{1}{r}\left[\dfrac{\partial}{\partial r}\left(r\dfrac{\partial\psi_m(r)}{\partial r}\right) - \dfrac{m^2}{r}\psi_m(r)\right]e^{im\theta}$.

Following the discussion in [14,15], to overcome the pole singularity introduced by polar coordinate transformation, we need to introduce the essential pole conditions, which make (2.9) meaningful, as follows:

(2.10) $m^2\psi_m(0) = 0,\qquad \lim\limits_{r\to 0^+}\left[\dfrac{\partial}{\partial r}\left(r\dfrac{\partial\psi_m(r)}{\partial r}\right) - \dfrac{m^2}{r}\psi_m(r)\right] = 0$.

Using the fact that $\dfrac{\partial}{\partial r}\left(r\dfrac{\partial\psi_m(r)}{\partial r}\right) = \dfrac{\partial\psi_m(r)}{\partial r} + r\dfrac{\partial^2\psi_m(r)}{\partial r^2}$, (2.10) can be reduced to

(2.11) $m^2\psi_m(0) = 0,\qquad (1-m^2)\dfrac{\partial\psi_m(0)}{\partial r} = 0$.

From (2.11) we can further obtain that

(2.12) (1) $\dfrac{\partial\psi_m(0)}{\partial r} = 0$, $(m = 0)$;

(2.13) (2) $\psi_m(0) = 0$, $(m = 1)$;

(2.14) (3) $\psi_m(0) = 0$, $\dfrac{\partial\psi_m(0)}{\partial r} = 0$, $(m \geq 2)$.

Let $r = \dfrac{(t+1)R}{2}$, $u_m(t) = \psi_m(r)$, and $L_mu_m(t) = \dfrac{1}{t+1}\dfrac{\partial}{\partial t}\left((t+1)\dfrac{\partial u_m(t)}{\partial t}\right) - \dfrac{m^2}{(t+1)^2}u_m(t)$. From (2.8), (2.11)–(2.14), and the orthogonality of the Fourier basis functions, (2.5)–(2.7) are equivalent to a series of one-dimensional eigenvalue problems:

(2.15) $L_m^2u_m(t) = 0$, $t\in(-1,1)$,

(2.16) (1) $\dfrac{\partial u_m(-1)}{\partial t} = 0$, $u_m(1) = 0$, $L_mu_m(1) = \dfrac{R}{2}\lambda_m\dfrac{\partial u_m(1)}{\partial t}$, $(m = 0)$;

(2.17) (2) $u_m(-1) = 0$, $u_m(1) = 0$, $L_mu_m(1) = \dfrac{R}{2}\lambda_m\dfrac{\partial u_m(1)}{\partial t}$, $(m = 1)$;

(2.18) (3) $u_m(\pm 1) = 0$, $\dfrac{\partial u_m(-1)}{\partial t} = 0$, $L_mu_m(1) = \dfrac{R}{2}\lambda_m\dfrac{\partial u_m(1)}{\partial t}$, $(m \geq 2)$.
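The reduction above can be sanity-checked symbolically. On the unit disk ($R = 1$), the function $\psi_m(r) = r^{m+2} - r^m$ vanishes at $r = 1$ and is a natural candidate eigenfunction; the sketch below (our own check, not taken from the paper) verifies for $m = 2$ that the transformed $u_m(t) = \psi_m((t+1)/2)$ satisfies $L_m^2u_m = 0$ and that the boundary relation in (2.18) yields the eigenvalue $\lambda_m = 2(m+1)$:

```python
import sympy as sp

t = sp.symbols('t')
m, R = 2, 1
rr = (t + 1)*R/sp.Integer(2)          # r = (t + 1) R / 2
u = rr**(m + 2) - rr**m               # candidate: psi_m(r) = r^(m+2) - r^m

def Lm(v):
    # the radial operator L_m in the t variable
    return sp.diff((t + 1)*sp.diff(v, t), t)/(t + 1) - m**2*v/(t + 1)**2

Lmu = sp.simplify(Lm(u))
assert sp.simplify(Lm(Lmu)) == 0      # L_m^2 u = 0, cf. (2.15)
# boundary relation L_m u(1) = (R/2) lambda u'(1)  =>  lambda = 2 L_m u(1) / (R u'(1))
lam = sp.simplify(2*Lmu.subs(t, 1)/(R*sp.diff(u, t).subs(t, 1)))
assert lam == 2*(m + 1)               # lambda_2 = 6 on the unit disk
```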

## 3 Weighted space and discrete variational form

Without loss of generality, we only consider the case of $m \geq 0$. First, we partition the solution interval $I = (-1,1)$ as follows:

$-1 = t_0 < t_1 < \cdots < t_i < \cdots < t_n = 1$.

Define the usual weighted Sobolev space:

$L_\omega^2(I) \coloneqq \left\{\rho : \displaystyle\int_I \omega\rho^2\,\mathrm{d}t < \infty\right\}$,

equipped with the following inner product and norm:

$(\rho, v)_\omega = \displaystyle\int_I \omega\rho v\,\mathrm{d}t,\qquad \|\rho\|_\omega = \left(\int_I \omega\rho^2\,\mathrm{d}t\right)^{\frac{1}{2}}$,

where $\omega = 1+t$, $t\in(-1,1)$. We further introduce the following weighted Sobolev space:

$H_{0,\omega,m}^2(I) \coloneqq \left\{u_m : L_mu_m \in L_\omega^2(I),\ m^2u_m(-1) = (1-m^2)\dfrac{\partial u_m(-1)}{\partial t} = u_m(1) = 0\right\}$,

equipped with the inner product and norm:

$(u_m, v_m)_{2,\omega,m} = (L_mu_m, L_mv_m)_\omega,\qquad \|u_m\|_{2,\omega,m} = \sqrt{(u_m, u_m)_{2,\omega,m}}$.
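The weight $\omega = 1+t$ is exactly the Gauss–Jacobi setting with parameters $\alpha = 0$, $\beta = 1$, so $\omega$-weighted integrals such as $(\rho,\rho)_\omega$ can be evaluated exactly for polynomials by a Jacobi rule; a small sketch:

```python
import numpy as np
from scipy.special import roots_jacobi

# Gauss-Jacobi rule for the weight (1 - t)^0 (1 + t)^1 = omega on (-1, 1)
x, w = roots_jacobi(5, 0, 1)
val = np.sum(w * x**2)        # (rho, rho)_omega for rho(t) = t
exact = 2/3                   # int_{-1}^{1} (1 + t) t^2 dt = 2/3
assert abs(val - exact) < 1e-12
```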

Then the variational form of (2.15)–(2.18) is: Find $(\lambda_m, u_m \neq 0)\in\mathbb{R}\times H_{0,\omega,m}^2(I)$ such that

(3.1) $A_m(u_m, v_m) = \lambda_mB_m(u_m, v_m),\qquad \forall v_m\in H_{0,\omega,m}^2(I)$,

where

$A_m(u_m, v_m) = \displaystyle\int_I (t+1)L_mu_mL_mv_m\,\mathrm{d}t,\qquad B_m(u_m, v_m) = R\dfrac{\partial u_m(1)}{\partial t}\dfrac{\partial v_m(1)}{\partial t}$.

Let us denote by $U_h$ the piecewise cubic Hermite interpolation function space, and define the approximation space $S_h(m) = U_h\cap H_{0,\omega,m}^2(I)$. Then the discrete variational form associated with (3.1) is: Find $(\lambda_{mh}, u_{mh} \neq 0)\in\mathbb{R}\times S_h(m)$ such that

(3.2) $A_m(u_{mh}, v_{mh}) = \lambda_{mh}B_m(u_{mh}, v_{mh}),\qquad \forall v_{mh}\in S_h(m)$.

## 4 Error estimation of approximation solutions

For the sake of brevity, we shall use the expression $a \lesssim b$, which means $a \leq cb$, where $c$ is a positive constant.

## Lemma 1

For any $u_m, v_m\in H_{0,\omega,m}^2(I)$, the following equalities hold:

(4.1) $\displaystyle\int_I (t+1)L_mu_mL_mv_m\,\mathrm{d}t = \int_I (t+1)u_m''v_m''\,\mathrm{d}t + (2m^2+1)\int_I \frac{1}{t+1}u_m'v_m'\,\mathrm{d}t + m^2(m^2-4)\int_I \frac{1}{(t+1)^3}u_mv_m\,\mathrm{d}t + u_m'(1)v_m'(1)$

with $m \neq 1$, and

(4.2) $\displaystyle\int_I (t+1)L_mu_mL_mv_m\,\mathrm{d}t = \int_I (t+1)\partial_t\left(u_m' - \frac{m}{t+1}u_m\right)\partial_t\left(v_m' - \frac{m}{t+1}v_m\right)\mathrm{d}t + (1+m)^2\int_I \frac{1}{t+1}\left(u_m' - \frac{m}{t+1}u_m\right)\left(v_m' - \frac{m}{t+1}v_m\right)\mathrm{d}t + (1+m)u_m'(1)v_m'(1)$

with $m = 1$.

## Proof

Using integration by parts, pole conditions, and boundary conditions, we derive that

(4.3) $\displaystyle\int_I (u_m''v_m' + u_m'v_m'')\,\mathrm{d}t = u_m'(1)v_m'(1)$,

(4.4) $\displaystyle\int_I \frac{1}{(t+1)^2}(u_m'v_m + u_mv_m')\,\mathrm{d}t = 2\int_I \frac{1}{(t+1)^3}u_mv_m\,\mathrm{d}t$,

(4.5) $\displaystyle\int_I \frac{1}{t+1}(u_m''v_m + u_mv_m'')\,\mathrm{d}t = 2\int_I \frac{1}{(t+1)^3}u_mv_m\,\mathrm{d}t - 2\int_I \frac{1}{t+1}u_m'v_m'\,\mathrm{d}t$.

Then, when $m \neq 1$, we derive from (4.3)–(4.5) that

$\begin{aligned}\int_I (t+1)L_mu_mL_mv_m\,\mathrm{d}t &= \int_I (t+1)\left(u_m'' + \frac{u_m'}{t+1} - \frac{m^2}{(t+1)^2}u_m\right)\left(v_m'' + \frac{v_m'}{t+1} - \frac{m^2}{(t+1)^2}v_m\right)\mathrm{d}t\\ &= \int_I (t+1)u_m''v_m''\,\mathrm{d}t + \int_I (u_m''v_m' + u_m'v_m'')\,\mathrm{d}t - m^2\int_I \frac{1}{(t+1)^2}(u_m'v_m + u_mv_m')\,\mathrm{d}t\\ &\quad - m^2\int_I \frac{1}{t+1}(u_m''v_m + u_mv_m'')\,\mathrm{d}t + m^4\int_I \frac{1}{(t+1)^3}u_mv_m\,\mathrm{d}t + \int_I \frac{1}{t+1}u_m'v_m'\,\mathrm{d}t\\ &= \int_I (t+1)u_m''v_m''\,\mathrm{d}t + (2m^2+1)\int_I \frac{1}{t+1}u_m'v_m'\,\mathrm{d}t + m^2(m^2-4)\int_I \frac{1}{(t+1)^3}u_mv_m\,\mathrm{d}t + u_m'(1)v_m'(1).\end{aligned}$

When m = 1 , we have

$\begin{aligned}\int_I (t+1)L_mu_mL_mv_m\,\mathrm{d}t &= \int_I (t+1)\left(u_m'' + \frac{u_m'}{t+1} - \frac{m^2}{(t+1)^2}u_m\right)\left(v_m'' + \frac{v_m'}{t+1} - \frac{m^2}{(t+1)^2}v_m\right)\mathrm{d}t\\ &= \int_I (t+1)\partial_t\left(u_m' - \frac{m}{t+1}u_m\right)\partial_t\left(v_m' - \frac{m}{t+1}v_m\right)\mathrm{d}t + (1+m)^2\int_I \frac{1}{t+1}\left(u_m' - \frac{m}{t+1}u_m\right)\left(v_m' - \frac{m}{t+1}v_m\right)\mathrm{d}t\\ &\quad + (1+m)\int_I \partial_t\left[\left(u_m' - \frac{m}{t+1}u_m\right)\left(v_m' - \frac{m}{t+1}v_m\right)\right]\mathrm{d}t\\ &= \int_I (t+1)\partial_t\left(u_m' - \frac{m}{t+1}u_m\right)\partial_t\left(v_m' - \frac{m}{t+1}v_m\right)\mathrm{d}t + (1+m)^2\int_I \frac{1}{t+1}\left(u_m' - \frac{m}{t+1}u_m\right)\left(v_m' - \frac{m}{t+1}v_m\right)\mathrm{d}t + (1+m)u_m'(1)v_m'(1).\end{aligned}$□

## Theorem 1

$A_m(u_m, v_m)$ is a bounded and coercive bilinear functional on $H_{0,\omega,m}^2(I)\times H_{0,\omega,m}^2(I)$, i.e.,

$|A_m(u_m, v_m)| \lesssim \|u_m\|_{2,\omega,m}\|v_m\|_{2,\omega,m}$,

$A_m(u_m, u_m) \gtrsim \|u_m\|_{2,\omega,m}^2$.

## Proof

From the Cauchy–Schwarz inequality, we derive that

$\begin{aligned}|A_m(u_m, v_m)| &= \left|\int_I (t+1)L_mu_mL_mv_m\,\mathrm{d}t\right| \leq \left(\int_I (t+1)|L_mu_m|^2\,\mathrm{d}t\right)^{\frac{1}{2}}\left(\int_I (t+1)|L_mv_m|^2\,\mathrm{d}t\right)^{\frac{1}{2}} = \|u_m\|_{2,\omega,m}\|v_m\|_{2,\omega,m},\\ A_m(u_m, u_m) &= \int_I (t+1)|L_mu_m|^2\,\mathrm{d}t = \|u_m\|_{2,\omega,m}^2.\end{aligned}$□

## Lemma 2

Let $\lambda_m^l$ be the $l$th eigenvalue of the variational form (3.1), and let $V_l$ denote any $l$-dimensional subspace of $H_{0,\omega,m}^2(I)$. For $\lambda_m^1 \leq \lambda_m^2 \leq \cdots \leq \lambda_m^l \leq \cdots$, it holds that

(4.6) $\lambda_m^l = \min\limits_{V_l\subset H_{0,\omega,m}^2(I)}\max\limits_{v_m\in V_l}\dfrac{A_m(v_m, v_m)}{B_m(v_m, v_m)}$.

## Proof

See Theorem 3.1 in [17].□
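The minimax characterization can be illustrated on a finite-dimensional analogue: for symmetric positive definite matrices $A$ and $B$ (a simplification for the demo; the Steklov form $B_m$ itself is only positive semi-definite), the $l$th generalized eigenvalue is a lower bound for the maximal Rayleigh quotient over every $l$-dimensional trial subspace, with equality for the span of the first $l$ eigenvectors. A minimal sketch, all names illustrative:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, l = 8, 3
M = rng.standard_normal((n, n)); A = M @ M.T + n*np.eye(n)   # SPD stand-in for A_m
N = rng.standard_normal((n, n)); B = N @ N.T + n*np.eye(n)   # SPD stand-in for B_m
lam = eigh(A, B, eigvals_only=True)                          # ascending eigenvalues

# every l-dimensional trial subspace gives max Rayleigh quotient >= lam[l-1]
for _ in range(200):
    V = rng.standard_normal((n, l))
    mx = eigh(V.T @ A @ V, V.T @ B @ V, eigvals_only=True)[-1]
    assert mx >= lam[l-1] - 1e-9
```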

## Lemma 3

Let $\lambda_m^i$ be the eigenvalues of the variational form (3.1), arranged in ascending order, and define

$W_{i,j} = \mathrm{span}\{u_m^i, \ldots, u_m^j\}$,

where $u_m^i$ is the eigenfunction associated with $\lambda_m^i$. Then it holds that

(4.7) $\lambda_m^l = \max\limits_{v_m\in W_{k,l}}\dfrac{A_m(v_m, v_m)}{B_m(v_m, v_m)},\qquad k \leq l$,

(4.8) $\lambda_m^l = \min\limits_{v_m\in W_{l,n}}\dfrac{A_m(v_m, v_m)}{B_m(v_m, v_m)},\qquad l \leq n$.

## Proof

See Lemma 3.2 in [17].□

For the discrete form (3.2), the following minimax principle also holds (see [17]).

## Lemma 4

Let $\lambda_{mh}^l$ be the $l$th eigenvalue of the discrete variational form (3.2), and let $V_l^h$ denote any $l$-dimensional subspace of $S_h(m)$. For $\lambda_{mh}^1 \leq \lambda_{mh}^2 \leq \cdots \leq \lambda_{mh}^l \leq \cdots$, it holds that

(4.9) $\lambda_{mh}^l = \min\limits_{V_l^h\subset S_h(m)}\max\limits_{v_m\in V_l^h}\dfrac{A_m(v_m, v_m)}{B_m(v_m, v_m)}$.

Define an orthogonal projection $Q_h^{2,m} : H_{0,\omega,m}^2(I)\to S_h(m)$ by

$A_m(u_m - Q_h^{2,m}u_m, v_{mh}) = 0,\qquad \forall v_{mh}\in S_h(m)$.

## Theorem 2

Let $\lambda_{mh}^l$ be the approximation of $\lambda_m^l$. Then it holds that

(4.10) $0 < \lambda_m^l \leq \lambda_{mh}^l \leq \lambda_m^l\max\limits_{v_m\in W_{1,l}}\dfrac{B_m(v_m, v_m)}{B_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m)}$.

## Proof

According to the positive definite properties of $A_m(u_m, v_m)$ and $B_m(u_m, v_m)$, we derive that $\lambda_m^l > 0$. Since $S_h(m)\subset H_{0,\omega,m}^2(I)$, from (4.6) and (4.9) we obtain $\lambda_m^l \leq \lambda_{mh}^l$. Let $Q_h^{2,m}W_{1,l}$ be the space spanned by $Q_h^{2,m}u_m^1, Q_h^{2,m}u_m^2, \ldots, Q_h^{2,m}u_m^l$. From the arguments of Lemma 4.1 in [17], we know that $Q_h^{2,m}W_{1,l}$ is an $l$-dimensional subspace of $S_h(m)$. We derive from the minimax principle that

$\lambda_{mh}^l \leq \max\limits_{v_m\in Q_h^{2,m}W_{1,l}}\dfrac{A_m(v_m, v_m)}{B_m(v_m, v_m)} = \max\limits_{v_m\in W_{1,l}}\dfrac{A_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m)}{B_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m)}$.

From the bilinearity of $A_m$, we have $A_m(v_m, v_m) = A_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m) + 2A_m(v_m - Q_h^{2,m}v_m, Q_h^{2,m}v_m) + A_m(v_m - Q_h^{2,m}v_m, v_m - Q_h^{2,m}v_m)$. Further, from $A_m(v_m - Q_h^{2,m}v_m, Q_h^{2,m}v_m) = 0$ and the non-negativity of $A_m(v_m - Q_h^{2,m}v_m, v_m - Q_h^{2,m}v_m)$, we have

$A_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m) \leq A_m(v_m, v_m)$.

Thus, we obtain that

$\lambda_{mh}^l \leq \max\limits_{v_m\in W_{1,l}}\dfrac{A_m(v_m, v_m)}{B_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m)} = \max\limits_{v_m\in W_{1,l}}\left[\dfrac{A_m(v_m, v_m)}{B_m(v_m, v_m)}\cdot\dfrac{B_m(v_m, v_m)}{B_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m)}\right] \leq \lambda_m^l\max\limits_{v_m\in W_{1,l}}\dfrac{B_m(v_m, v_m)}{B_m(Q_h^{2,m}v_m, Q_h^{2,m}v_m)}$.

The proof is complete.□

Define the interpolation operator $I_m^h : H_{0,\omega,m}^2(I)\to U_h$ by

$I_m^hu_m(t) = H_{3,m,i}(t),\qquad t\in I_i$,

where $I_i = [t_{i-1}, t_i]$ and $H_{3,m,i}(t)$ is the cubic Hermite interpolation polynomial of $u_m$ on $I_i$. Let

$u_m^i(t) = u_m(t),\qquad t\in I_i$.

From the remainder theorem of cubic Hermite interpolation, we have

$u_m^i(t) - H_{3,m,i}(t) = \dfrac{(u_m^i)^{(4)}(\xi_m^i(t))}{4!}(t-t_{i-1})^2(t-t_i)^2$,

where $\xi_m^i(t)\in I_i$ depends on $t$.

## Theorem 3

Let $E_m^i(t) = \dfrac{(u_m^i)^{(4)}(\xi_m^i(t))}{4!}$, $u_m\in H_{0,\omega,m}^2(I)$. Assume that $u_m$ is sufficiently smooth such that $|\partial_t^kE_m^i(t)| \leq M$ $(k = 0, 1, 2)$, where $M$ is a constant greater than zero. Then the following inequality holds:

(4.11) $\|\partial_t^2(I_m^hu_m - u_m)\| \lesssim h^2$,

where $h = \max\limits_{1\leq i\leq n}\{h_i\}$, $h_i = t_i - t_{i-1}$, and $\|u_m\| = \left(\displaystyle\int_I u_m^2\,\mathrm{d}t\right)^{\frac{1}{2}}$.

## Proof

Since

$u_m^i(t) - H_{3,m,i}(t) = E_m^i(t)(t-t_{i-1})^2(t-t_i)^2$,

then we have

$\partial_t^2(u_m^i(t) - H_{3,m,i}(t)) = \partial_t^2E_m^i(t)\,(t-t_{i-1})^2(t-t_i)^2 + 2\partial_tE_m^i(t)\,\partial_t[(t-t_{i-1})^2(t-t_i)^2] + E_m^i(t)\,\partial_t^2[(t-t_{i-1})^2(t-t_i)^2]$.

Thus, we obtain

$\begin{aligned}|\partial_t^2(u_m^i(t) - H_{3,m,i}(t))|^2 &\lesssim [(t-t_{i-1})^2(t-t_i)^2]^2 + \{\partial_t[(t-t_{i-1})^2(t-t_i)^2]\}^2 + \{\partial_t^2[(t-t_{i-1})^2(t-t_i)^2]\}^2\\ &= [(t-t_{i-1})(t-t_i)]^4 + 4[(t-t_{i-1})(t-t_i)(2t-t_{i-1}-t_i)]^2 + 4[4(t-t_{i-1})(t-t_i) + (t-t_{i-1})^2 + (t-t_i)^2]^2\\ &\leq \left(\frac{h_i^2}{4}\right)^4 + 4\left(\frac{h_i^3}{4}\right)^2 + 4(3h_i^2)^2 \lesssim h_i^4,\end{aligned}$

where the last step uses $h_i \leq 2$.

Thus,

$\|\partial_t^2(I_m^hu_m - u_m)\|^2 = \sum\limits_{i=1}^n\displaystyle\int_{I_i}[\partial_t^2(u_m^i(t) - H_{3,m,i}(t))]^2\,\mathrm{d}t \lesssim \sum\limits_{i=1}^nh_i^5 \lesssim h^4.$

Furthermore, we have

$\|\partial_t^2(I_m^hu_m - u_m)\| \lesssim h^2$.

The proof is complete.□
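The $O(h^2)$ rate of Theorem 3 for the second derivative of the cubic Hermite interpolant can be observed numerically; the sketch below uses $u = \sin t$ and SciPy's piecewise cubic Hermite interpolant (rather than the basis of Section 5) on uniform meshes:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def h2_err(n):
    """max |d^2/dt^2 (I_h u - u)| for u = sin on a uniform n-element mesh of (-1, 1)."""
    t = np.linspace(-1.0, 1.0, n + 1)
    s = CubicHermiteSpline(t, np.sin(t), np.cos(t))   # interpolates values and slopes
    tt = np.linspace(-1.0, 1.0, 4001)
    return np.max(np.abs(s(tt, 2) + np.sin(tt)))      # u'' = -sin t

e1, e2 = h2_err(8), h2_err(16)
assert 3.0 < e1/e2 < 5.0    # halving h divides the error by about 4, i.e. O(h^2)
```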

## Theorem 4

Let $\lambda_{mh}^l$ be the approximate eigenvalue of $\lambda_m^l$. Assume that $u_m\in H_{0,\omega,m}^2(I)$ satisfies the conditions of Theorem 3. Then the following inequality holds:

$\lambda_{mh}^l - \lambda_m^l \leq c(l)h^4$,

where $c(l)$ is a positive constant independent of $h$.

## Proof

For brevity, we only give the proof for the case $m \neq 1$; the case $m = 1$ can be proven similarly. For $q\in W_{1,l}$, we have $q = \sum_{i=1}^lq_iu_m^i$. Using the orthogonality of the eigenfunctions $u_m^i$ and the normalization $B_m(u_m^i, u_m^i) = 1$, we have

$\begin{aligned}B_m(q, q) - B_m(Q_h^{2,m}q, Q_h^{2,m}q) &\leq 2B_m(q, q - Q_h^{2,m}q) = 2\sum_{i,j=1}^lq_iq_jB_m(u_m^i - Q_h^{2,m}u_m^i, u_m^j)\\ &\leq \left(\sum_{i=1}^lq_i^2\right)2l\max_{i,j=1,\ldots,l}|B_m(u_m^i - Q_h^{2,m}u_m^i, u_m^j)|.\end{aligned}$

From the Cauchy–Schwarz inequality, we have

$\begin{aligned}B_m(u_m^i - Q_h^{2,m}u_m^i, u_m^j) &= \frac{1}{\lambda_m^j}\lambda_m^jB_m(u_m^j, u_m^i - Q_h^{2,m}u_m^i) = \frac{1}{\lambda_m^j}A_m(u_m^j, u_m^i - Q_h^{2,m}u_m^i)\\ &= \frac{1}{\lambda_m^j}A_m(u_m^j - Q_h^{2,m}u_m^j, u_m^i - Q_h^{2,m}u_m^i)\\ &\leq \frac{1}{\lambda_m^j}[A_m(u_m^j - Q_h^{2,m}u_m^j, u_m^j - Q_h^{2,m}u_m^j)]^{\frac{1}{2}}[A_m(u_m^i - Q_h^{2,m}u_m^i, u_m^i - Q_h^{2,m}u_m^i)]^{\frac{1}{2}}.\end{aligned}$

When $m \geq 2$, from the Hardy inequality (cf. B8.6 in [16]) we derive

$\displaystyle\int_I \frac{1}{(t+1)^3}(u_m^i)^2\,\mathrm{d}t \lesssim \int_I \frac{1}{t+1}(\partial_tu_m^i)^2\,\mathrm{d}t$,

$\displaystyle\int_I \frac{1}{(t+1)^2}(\partial_tu_m^i)^2\,\mathrm{d}t \lesssim \int_I (\partial_t^2u_m^i)^2\,\mathrm{d}t$.

Since

$[\partial_tu_m^i(1)]^2 = \dfrac{1}{4}\left[\displaystyle\int_I \partial_t((t+1)\partial_tu_m^i)\,\mathrm{d}t\right]^2 = \dfrac{1}{4}\left[\displaystyle\int_I (\partial_tu_m^i + (t+1)\partial_t^2u_m^i)\,\mathrm{d}t\right]^2 \lesssim \displaystyle\int_I (\partial_tu_m^i)^2\,\mathrm{d}t + \int_I (t+1)(\partial_t^2u_m^i)^2\,\mathrm{d}t \lesssim \int_I \frac{1}{t+1}(\partial_tu_m^i)^2\,\mathrm{d}t + \int_I (t+1)(\partial_t^2u_m^i)^2\,\mathrm{d}t$,

from Lemma 1 we have

$\|u_m^i\|_{2,\omega,m}^2 \lesssim \displaystyle\int_I (t+1)(\partial_t^2u_m^i)^2\,\mathrm{d}t + \int_I \frac{1}{t+1}(\partial_tu_m^i)^2\,\mathrm{d}t + \int_I \frac{1}{(t+1)^3}(u_m^i)^2\,\mathrm{d}t \lesssim \int_I (\partial_t^2u_m^i)^2\,\mathrm{d}t + \int_I \frac{1}{t+1}(\partial_tu_m^i)^2\,\mathrm{d}t \lesssim \int_I (\partial_t^2u_m^i)^2\,\mathrm{d}t + \int_I \frac{1}{(t+1)^2}(\partial_tu_m^i)^2\,\mathrm{d}t \lesssim \int_I (\partial_t^2u_m^i)^2\,\mathrm{d}t$.

Then we derive that

$\begin{aligned}B_m(u_m^i - Q_h^{2,m}u_m^i, u_m^j) &\leq \frac{1}{\lambda_m^j}[A_m(u_m^j - Q_h^{2,m}u_m^j, u_m^j - Q_h^{2,m}u_m^j)]^{\frac{1}{2}}[A_m(u_m^i - Q_h^{2,m}u_m^i, u_m^i - Q_h^{2,m}u_m^i)]^{\frac{1}{2}}\\ &\leq \frac{1}{\lambda_m^j}[A_m(u_m^j - I_m^hu_m^j, u_m^j - I_m^hu_m^j)]^{\frac{1}{2}}[A_m(u_m^i - I_m^hu_m^i, u_m^i - I_m^hu_m^i)]^{\frac{1}{2}}\\ &\lesssim \|u_m^j - I_m^hu_m^j\|_{2,\omega,m}\|u_m^i - I_m^hu_m^i\|_{2,\omega,m}\\ &\lesssim \left(\int_I [\partial_t^2(u_m^j - I_m^hu_m^j)]^2\,\mathrm{d}t\right)^{\frac{1}{2}}\left(\int_I [\partial_t^2(u_m^i - I_m^hu_m^i)]^2\,\mathrm{d}t\right)^{\frac{1}{2}} = \|\partial_t^2(u_m^j - I_m^hu_m^j)\|\,\|\partial_t^2(u_m^i - I_m^hu_m^i)\|.\end{aligned}$

Similarly, when m = 0 , we derive that

$B_m(u_m^i - Q_h^{2,m}u_m^i, u_m^j) \lesssim \|\partial_t^2(u_m^j - I_m^hu_m^j)\|\,\|\partial_t^2(u_m^i - I_m^hu_m^i)\|$.

Since

$\dfrac{B_m(q, q)}{B_m(Q_h^{2,m}q, Q_h^{2,m}q)} \leq \dfrac{1}{1 - 2l\max\limits_{i,j=1,\ldots,l}|B_m(u_m^i - Q_h^{2,m}u_m^i, u_m^j)|}$,

we obtain the desired result from Theorems 2 and 3.□
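The two Hardy-type inequalities used in the proof can be illustrated on a sample function satisfying the $m \geq 2$ pole conditions $u(-1) = \partial_tu(-1) = 0$; the constant 4 below is the classical Hardy constant, used here only for illustration:

```python
import sympy as sp

t = sp.symbols('t')
u = (1 + t)**2          # sample function with u(-1) = u'(-1) = 0
lhs1 = sp.integrate(u**2/(1 + t)**3, (t, -1, 1))              # = 2
rhs1 = sp.integrate(sp.diff(u, t)**2/(1 + t), (t, -1, 1))     # = 8
lhs2 = sp.integrate(sp.diff(u, t)**2/(1 + t)**2, (t, -1, 1))  # = 8
rhs2 = sp.integrate(sp.diff(u, t, 2)**2, (t, -1, 1))          # = 8
assert lhs1 <= 4*rhs1 and lhs2 <= 4*rhs2
```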

## 5 Efficient implementation of the algorithm

In order to solve problem (3.2) efficiently, we start by constructing a set of basis functions that satisfy the boundary conditions. Let

$\varphi_0^0(t) = \begin{cases}\left(\dfrac{2(t-t_0)}{h_1} + 1\right)\left(\dfrac{t-t_0}{h_1} - 1\right)^2, & t_0\leq t\leq t_1,\\ 0, & \text{otherwise},\end{cases}\qquad \varphi_0^1(t) = \begin{cases}h_1^{-2}(t-t_0)(t-t_1)^2, & t_0\leq t\leq t_1,\\ 0, & \text{otherwise},\end{cases}$

$\varphi_i^0(t) = \begin{cases}\left(1 + \dfrac{t-t_i}{h_i}\right)^2\left(1 - \dfrac{2(t-t_i)}{h_i}\right), & t_{i-1}\leq t\leq t_i,\\ \left(\dfrac{2(t-t_i)}{h_{i+1}} + 1\right)\left(\dfrac{t-t_i}{h_{i+1}} - 1\right)^2, & t_i\leq t\leq t_{i+1},\\ 0, & \text{otherwise},\end{cases}\qquad \varphi_i^1(t) = \begin{cases}h_i^{-2}(t-t_i)(t-t_{i-1})^2, & t_{i-1}\leq t\leq t_i,\\ h_{i+1}^{-2}(t-t_i)(t-t_{i+1})^2, & t_i\leq t\leq t_{i+1},\\ 0, & \text{otherwise},\end{cases}$

$\varphi_n^1(t) = \begin{cases}h_n^{-2}(t-t_n)(t-t_{n-1})^2, & t_{n-1}\leq t\leq t_n,\\ 0, & \text{otherwise},\end{cases}$

where $i = 1, \ldots, n-1$. It is clear that

$S_h(0) = \mathrm{span}\{\varphi_0^0(t), \ldots, \varphi_{n-1}^0(t), \varphi_1^1(t), \ldots, \varphi_n^1(t)\}$; $S_h(1) = \mathrm{span}\{\varphi_1^0(t), \ldots, \varphi_{n-1}^0(t), \varphi_0^1(t), \ldots, \varphi_n^1(t)\}$; $S_h(m) = \mathrm{span}\{\varphi_1^0(t), \ldots, \varphi_{n-1}^0(t), \varphi_1^1(t), \ldots, \varphi_n^1(t)\},\ (m\geq 2)$.
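The nodal properties behind these definitions ($\varphi_i^0$ has unit value and zero slope at $t_i$; $\varphi_i^1$ has zero value and unit slope) can be verified directly; a minimal sketch with illustrative names:

```python
import numpy as np

def phi0(tk, i, t):
    """Value-type basis: 1 at node tk[i], 0 at the other nodes, zero slope at every node."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    if i > 0:                                   # piece on [t_{i-1}, t_i]
        h = tk[i] - tk[i-1]
        m = (t >= tk[i-1]) & (t <= tk[i])
        s = (t[m] - tk[i]) / h
        out[m] = (1 + s)**2 * (1 - 2*s)
    if i < len(tk) - 1:                         # piece on [t_i, t_{i+1}]
        h = tk[i+1] - tk[i]
        m = (t >= tk[i]) & (t <= tk[i+1])
        s = (t[m] - tk[i]) / h
        out[m] = (2*s + 1) * (s - 1)**2
    return out

def phi1(tk, i, t):
    """Slope-type basis: slope 1 at node tk[i], value 0 and slope 0 at the other nodes."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    if i > 0:
        h = tk[i] - tk[i-1]
        m = (t >= tk[i-1]) & (t <= tk[i])
        out[m] = (t[m] - tk[i]) * (t[m] - tk[i-1])**2 / h**2
    if i < len(tk) - 1:
        h = tk[i+1] - tk[i]
        m = (t >= tk[i]) & (t <= tk[i+1])
        out[m] = (t[m] - tk[i]) * (t[m] - tk[i+1])**2 / h**2
    return out

tk = np.array([-1.0, -0.4, 0.2, 1.0])           # a small non-uniform test mesh
assert np.allclose(phi0(tk, 1, tk), [0.0, 1.0, 0.0, 0.0])
assert np.allclose(phi1(tk, 1, tk), 0.0)
```

A central-difference check of the slopes at an interior node confirms that $\varphi_i^0$ has zero slope there while $\varphi_i^1$ has slope one.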

Denote

$a_{ij}^{pq} = \displaystyle\int_I (t+1)(\varphi_j^p)''(\varphi_i^q)''\,\mathrm{d}t,\quad b_{ij}^{pq} = \int_I \frac{1}{t+1}(\varphi_j^p)'(\varphi_i^q)'\,\mathrm{d}t,\quad c_{ij}^{pq} = \int_I \frac{1}{(t+1)^3}\varphi_j^p\varphi_i^q\,\mathrm{d}t,\quad d_{ij}^{pq} = (\varphi_j^p)'(1)(\varphi_i^q)'(1)$,

where p , q = 0 , 1 .

Next, we will derive the matrix form of the discrete variational scheme (3.2).

Case 1. When m = 0 , let

(5.1) $u_{0h} = u_0^0\varphi_0^0 + u_n^1\varphi_n^1 + \sum\limits_{i=1}^{n-1}(u_i^0\varphi_i^0 + u_i^1\varphi_i^1)$.

Plugging expression (5.1) into (3.2) and letting $v_{0h}$ run through all the basis functions in $S_h(0)$, we derive that

(5.2) $(A_0 + B_0 + D_0)U_0 = \lambda_{0h}RD_0U_0$,

where

$A_0 = \begin{pmatrix}(a_{ij}^{00}) & (a_{ij}^{10})\\ (a_{ij}^{01}) & (a_{ij}^{11})\end{pmatrix},\qquad B_0 = \begin{pmatrix}(b_{ij}^{00}) & (b_{ij}^{10})\\ (b_{ij}^{01}) & (b_{ij}^{11})\end{pmatrix},\qquad D_0 = \begin{pmatrix}(d_{ij}^{00}) & (d_{ij}^{10})\\ (d_{ij}^{01}) & (d_{ij}^{11})\end{pmatrix},$

$U_0 = (u_0^0, \ldots, u_{n-1}^0, u_1^1, \ldots, u_n^1)^{\mathrm{T}}$.
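As a sanity check of the whole pipeline, the following sketch assembles a system of the form (5.2) for $m = 0$ on the unit disk ($R = 1$) with uniform cubic Hermite elements and solves the generalized eigenvalue problem. Since $D_0$ has rank one, the pencil has a single finite eigenvalue; for the unit disk it should match the value $2$ given by the explicit eigenfunction $\phi = r^2 - 1$ (which satisfies $\Delta^2\phi = 0$, $\phi|_{r=1} = 0$, and $\Delta\phi = 4 = 2\,\partial_\nu\phi$ on $r = 1$). All function names are illustrative, not the paper's:

```python
import numpy as np
from scipy.linalg import eig

def steklov_m0(n, R=1.0):
    """Assemble and solve (5.2): cubic Hermite FEM for the m = 0 radial problem."""
    t = np.linspace(-1.0, 1.0, n + 1)
    ndof = 2*(n + 1)                          # (value, slope) unknowns per node
    A = np.zeros((ndof, ndof))
    xg, wg = np.polynomial.legendre.leggauss(8)
    for k in range(n):
        a, b = t[k], t[k + 1]
        h = b - a
        tq = 0.5*h*xg + 0.5*(a + b)           # Gauss nodes on [t_k, t_{k+1}]
        wq = 0.5*h*wg
        s = (tq - a)/h
        # first and second t-derivatives of the 4 local Hermite shape functions
        d1 = [6*(s**2 - s)/h, 1 - 4*s + 3*s**2, 6*(s - s**2)/h, 3*s**2 - 2*s]
        d2 = [(12*s - 6)/h**2, (6*s - 4)/h, (6 - 12*s)/h**2, (6*s - 2)/h]
        idx = [2*k, 2*k + 1, 2*k + 2, 2*k + 3]
        for i in range(4):
            for j in range(4):
                # A_0 + B_0 contributions: int (t+1) u''v'' + int u'v'/(t+1)
                A[idx[i], idx[j]] += np.sum(wq*((1 + tq)*d2[i]*d2[j]
                                                + d1[i]*d1[j]/(1 + tq)))
    A[-1, -1] += 1.0                          # D_0 term: u'(1) v'(1)
    B = np.zeros((ndof, ndof))
    B[-1, -1] = R                             # right-hand side R u'(1) v'(1)
    # essential conditions for m = 0: slope at t = -1 and value at t = 1 vanish
    free = [d for d in range(ndof) if d not in (1, ndof - 2)]
    lam = eig(A[np.ix_(free, free)], B[np.ix_(free, free)], right=False)
    return lam.real[np.isfinite(lam) & (np.abs(lam) < 1e8)]

lam = steklov_m0(8)    # rank-one D_0 => a single finite eigenvalue, close to 2
```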

Case 2. Similarly, when $m = 1$, let

(5.3) $u_{1h} = u_0^1\varphi_0^1 + u_n^1\varphi_n^1 + \sum\limits_{i=1}^{n-1}(u_i^0\varphi_i^0 + u_i^1\varphi_i^1)$.

Plugging expression (5.3) into (3.2) and letting $v_{1h}$ run through all the basis functions in $S_h(1)$, we obtain

(5.4) $[A_1 + (1+m)D_1]U_1 = \lambda_{1h}RD_1U_1$,

where

A 1 = ( a ˜ i j 00 ) ( a ˜ i j