
Estimating SPARMA Models with Dependent Error Terms

  • Yacouba Boubacar Maïnassara and Abdoulkarim Ilmi Amir

Abstract

We are interested in a class of seasonal autoregressive moving average (SARMA) models with periodically varying parameters, the so-called seasonal periodic autoregressive moving average (SPARMA) models, under the assumption that the errors are uncorrelated but not necessarily independent (i.e. weak SPARMA models). Relaxing the classical independence assumption on the errors considerably extends the range of application of SPARMA models and allows one to cover linear representations of general nonlinear processes. We establish the asymptotic properties of the quasi-generalized least squares (QLS) estimator of these models. Particular attention is given to the estimation of the asymptotic variance matrix of the QLS estimator, which may be very different from that obtained in the standard framework. A set of Monte Carlo experiments is presented.

AMS 2000 Subject Classifications: Primary 62M10; 62F03; 62F05; secondary 91B84; 62P05
JEL Classification: C02; C13; C22

Corresponding author: Yacouba Boubacar Maïnassara, Laboratoire de mathématiques de Besançon, Université Bourgogne Franche-Comté, UMR CNRS 6623, 16 route de Gray, 25030 Besançon, France.

Article note: This article is linked with another article which can be found here: https://doi.org/10.1515/jtse-2022-0002.


Acknowledgments

We sincerely thank the anonymous referee and Editor for helpful remarks.

Appendix A: Proofs

The proofs of Theorems 1 and 2 are quite technical. They are adaptations of the arguments used in Francq, Roy, and Saidi (2011). In order to shorten the paper and to point out the novelties of our proofs, only the new arguments are developed. Let $K$ and $\rho$ be generic constants, whose values may change along the proofs, such that $K > 0$ and $\rho \in (0, 1)$.

A.1 Reminder on Technical Issues on the WLS Method for SPARMA Models

We recall that, given a realization $X_1, \dots, X_{NT}$, the periodic noise $\epsilon_{nT+\nu}(\alpha)$ is approximated by $e_{nT+\nu}(\alpha)$, which is defined in (8).

The starting point of the asymptotic analysis is the property that $\epsilon_{nT+\nu}(\alpha) - e_{nT+\nu}(\alpha)$ converges uniformly to 0 (almost surely) as $n$ goes to infinity. Similar properties also hold for the derivatives of $\epsilon_{nT+\nu}(\alpha) - e_{nT+\nu}(\alpha)$ with respect to $\alpha$. We sum up the facts that we shall need in the sequel, and we refer to the appendix of Francq, Roy, and Saidi (2011) for a more detailed treatment.

Under Assumption (A1), for any $\alpha \in \Delta_\delta$ and any $(l, m) \in \{1, \dots, k_0 T\}^2$, there exist absolutely summable deterministic sequences $(C_i(\alpha))_{i \ge 0}$, $(C_{m,i}(\alpha))_{i \ge 0}$ and $(C_{m,l,i}(\alpha))_{i \ge 0}$ such that, almost surely,

\[
(22)\qquad \epsilon_n(\alpha) = \sum_{i=0}^{\infty} C_i(\alpha) X_{n-i}, \qquad \frac{\partial \epsilon_n(\alpha)}{\partial \alpha_m} = \sum_{i=0}^{\infty} C_{m,i}(\alpha) X_{n-i} \qquad \text{and} \qquad \frac{\partial^2 \epsilon_n(\alpha)}{\partial \alpha_m \, \partial \alpha_l} = \sum_{i=0}^{\infty} C_{m,l,i}(\alpha) X_{n-i},
\]

\[
(23)\qquad e_n(\alpha) = \sum_{i=0}^{n-1} C_i(\alpha) X_{n-i}, \qquad \frac{\partial e_n(\alpha)}{\partial \alpha_m} = \sum_{i=0}^{n-1} C_{m,i}(\alpha) X_{n-i} \qquad \text{and} \qquad \frac{\partial^2 e_n(\alpha)}{\partial \alpha_m \, \partial \alpha_l} = \sum_{i=0}^{n-1} C_{m,l,i}(\alpha) X_{n-i}.
\]

A useful property of the matrix sequences $C$ is that they are asymptotically exponentially small: there exists $\rho \in (0, 1)$ such that, for all $i \ge 0$,

\[
(24)\qquad \sup_{\alpha \in \Delta_\delta} \left\{ \left\| C_i(\alpha) \right\| + \left\| C_{m,i}(\alpha) \right\| + \left\| C_{m,l,i}(\alpha) \right\| \right\} \le K \rho^i,
\]

where $K$ is a positive constant. Finally, from the above estimates, we are able to deduce that for any $(l, m) \in \{1, \dots, k_0 T\}^2$ it holds that

\[
(25)\qquad \sup_{\alpha \in \Delta_\delta} \left| \epsilon_{nT+\nu}(\alpha) - e_{nT+\nu}(\alpha) \right| \le K \rho^n,
\]

\[
(26)\qquad \rho^n \sup_{\alpha \in \Delta_\delta} \left| e_{nT+\nu}(\alpha) \right| = \rho^n \sup_{\alpha \in \Delta_\delta} \left| \epsilon_{nT+\nu}(\alpha) \right| \xrightarrow[n \to \infty]{\text{a.s.}} 0,
\]

\[
(27)\qquad \sup_{\sigma^2 > \underline{\sigma}} \sup_{\alpha \in \Delta_\delta} \left| O_N^{\sigma^2}(\alpha) - Q_N^{\sigma^2}(\alpha) \right| = O(1/N) \quad \text{a.s. as } N \to \infty,
\]

for any constant $\underline{\sigma} > 0$. Estimates analogous to (25), (26) and (27) are satisfied for the first- and second-order derivatives of $\epsilon_{nT+\nu}(\alpha)$ and $e_{nT+\nu}(\alpha)$.

This implies that the sequences $\sqrt{N}\, \partial Q_N^{\sigma^2}(\alpha_0) / \partial \alpha$ and $\sqrt{N}\, \partial O_N^{\sigma^2}(\alpha_0) / \partial \alpha$ have the same asymptotic distribution. More precisely, we have

\[
(28)\qquad \sqrt{N}\, \frac{\partial}{\partial \alpha} \left\{ Q_N^{\sigma^2}(\alpha_0) - O_N^{\sigma^2}(\alpha_0) \right\} = o_P(1).
\]

We also have

\[
\sqrt{N} \left. \left\{ \frac{\partial O_N^{\hat{\sigma}^2}(\alpha)}{\partial \alpha} - \frac{\partial O_N^{\sigma_0^2}(\alpha)}{\partial \alpha} \right\} \right|_{\alpha = \alpha_0} = \sum_{\nu=1}^{T} \left( \frac{1}{\hat{\sigma}_\nu^2} - \frac{1}{\sigma_\nu^2} \right) \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} \epsilon_{nT+\nu}(\alpha_0) \left. \frac{\partial \epsilon_{nT+\nu}(\alpha)}{\partial \alpha} \right|_{\alpha = \alpha_0} = o_P(1),
\]

by using the fact that

\[
(29)\qquad \max_{\nu = 1, \dots, T} \left| \frac{1}{\hat{\sigma}_\nu^2} - \frac{1}{\sigma_\nu^2} \right| < \epsilon \quad \text{almost surely for } N \text{ large enough},
\]

for any $\epsilon > 0$, and the fact that $\left( N^{-1/2} \sum_{n=0}^{N-1} \epsilon_{nT+\nu}(\alpha_0)\, \partial \epsilon_{nT+\nu}(\alpha) / \partial \alpha \big|_{\alpha = \alpha_0} \right)_N$ is a tight sequence.
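As a quick numerical illustration (ours, not part of the paper's argument; the coefficient value is arbitrary), the following sketch mimics the bound (25) in the simplest special case $T = 1$ with an invertible MA(1): the truncated residuals, initialized at zero, approach the exact ones at the geometric rate $\rho = |b|$.

```python
import numpy as np

# Toy check of the geometric bound (25) for an MA(1): X_n = eps_n + b*eps_{n-1}.
# The truncated residuals e_n, computed as if the pre-sample noise were zero,
# satisfy |eps_n - e_n| = |b|^(n+1) |eps_{-1}|, i.e. K * rho^n with rho = |b|.
rng = np.random.default_rng(0)
b = 0.6                                   # hypothetical MA(1) coefficient
eps = rng.normal(size=501)                # eps[0] acts as the unobserved past shock
x = eps[1:] + b * eps[:-1]                # observed sample X_0, ..., X_499
true_eps = eps[1:]                        # the exact noise epsilon_n

e = np.empty_like(x)                      # truncated residuals e_n
prev = 0.0                                # unknown past residual set to zero
for n, xn in enumerate(x):
    e[n] = xn - b * prev
    prev = e[n]

gap = np.abs(true_eps - e)
print(gap[:3] / abs(eps[0]))              # |b|, |b|^2, |b|^3: geometric decay
```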

A.2 Proof of Theorem 3

The proof of Theorem 3 is based on a series of lemmas. We use the multiplicative matrix norm defined by $\|A\| = \sup_{\|x\| \le 1} \|Ax\| = \varrho^{1/2}(A'A)$, where $A$ is a $d_1 \times d_2$ matrix, $\|x\|$ is the Euclidean norm of the vector $x \in \mathbb{R}^{d_2}$, and $\varrho(\cdot)$ denotes the spectral radius. This norm satisfies

\[
(30)\qquad \|A\|^2 \le \sum_{i=1}^{k_1} \sum_{j=1}^{k_2} a_{i,j}^2
\]

with $a_{i,j}$ the entries of $A \in \mathbb{R}^{k_1 \times k_2}$. The choice of the norm is crucial for the following results to hold (with, e.g., the Euclidean norm, this is not the case).
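As an aside (our own sanity check, not part of the proof), the two properties used below, namely the bound (30) and the multiplicativity $\|AB\| \le \|A\|\, \|B\|$, are easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 6))
B = rng.normal(size=(6, 3))

spec = lambda M: np.linalg.norm(M, 2)      # spectral norm: sup ||Mx||, ||x|| <= 1
frob = lambda M: np.linalg.norm(M, "fro")  # sqrt of the sum of squared entries

assert spec(A) ** 2 <= frob(A) ** 2 + 1e-12        # property (30)
assert spec(A @ B) <= spec(A) * spec(B) + 1e-12    # multiplicativity of the norm
```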

We denote
\[
\Sigma_{\Upsilon, \underline{\Upsilon}_r} = E\, \Upsilon_n \underline{\Upsilon}_{r,n}', \qquad \Sigma_{\Upsilon} = E\, \Upsilon_n \Upsilon_n', \qquad \Sigma_{\underline{\Upsilon}_r} = E\, \underline{\Upsilon}_{r,n} \underline{\Upsilon}_{r,n}',
\]

where $\underline{\Upsilon}_{r,n} = (\Upsilon_{n-1}', \dots, \Upsilon_{n-r}')'$. For any $N \ge 0$ we have

\[
\hat{I}_N^{\mathrm{SP}} = \hat{\Phi}_r^{-1}(1)\, \hat{\Sigma}_{\hat{u}_r}\, \hat{\Phi}_r'^{-1}(1) = \left( \hat{\Phi}_r^{-1}(1) - \Phi^{-1}(1) \right) \hat{\Sigma}_{\hat{u}_r} \hat{\Phi}_r'^{-1}(1) + \Phi^{-1}(1) \left( \hat{\Sigma}_{\hat{u}_r} - \Sigma_u \right) \hat{\Phi}_r'^{-1}(1) + \Phi^{-1}(1)\, \Sigma_u \left( \hat{\Phi}_r'^{-1}(1) - \Phi'^{-1}(1) \right) + \Phi^{-1}(1)\, \Sigma_u\, \Phi'^{-1}(1).
\]

We then obtain

\[
(31)\qquad \left\| \hat{I}_N^{\mathrm{SP}} - I(\alpha_0) \right\| \le \left\| \hat{\Phi}_r^{-1}(1) - \Phi^{-1}(1) \right\| \left\| \hat{\Sigma}_{\hat{u}_r} \right\| \left\| \hat{\Phi}_r^{-1}(1) \right\| + \left\| \Phi^{-1}(1) \right\| \left\| \hat{\Sigma}_{\hat{u}_r} - \Sigma_u \right\| \left\| \hat{\Phi}_r^{-1}(1) \right\| + \left\| \Phi^{-1}(1) \right\| \left\| \Sigma_u \right\| \left\| \hat{\Phi}_r^{-1}(1) - \Phi^{-1}(1) \right\|
\]
\[
\le \left\| \hat{\Phi}_r^{-1}(1) - \Phi^{-1}(1) \right\| \left\{ \left\| \hat{\Sigma}_{\hat{u}_r} \right\| \left\| \hat{\Phi}_r^{-1}(1) \right\| + \left\| \Phi^{-1}(1) \right\| \left\| \Sigma_u \right\| \right\} + \left\| \Phi^{-1}(1) \right\| \left\| \hat{\Sigma}_{\hat{u}_r} - \Sigma_u \right\| \left\| \hat{\Phi}_r^{-1}(1) \right\|.
\]

It suffices to show that $\hat{\Phi}_r(1) \to \Phi(1)$ and $\hat{\Sigma}_{\hat{u}_r} \to \Sigma_u$ in probability to complete the proof of Theorem 3. Let $\mathbf{1}_r = (1, \dots, 1)'$ be the $r \times 1$ vector of ones and let $E_r = I_{T(p+q+P+Q)} \otimes \mathbf{1}_r$ be the $rT(p+q+P+Q) \times T(p+q+P+Q)$ matrix, where $\otimes$ denotes the Kronecker product and $I_m$ the $m \times m$ identity matrix. Write $\underline{\Phi}_r^* = (\Phi_1, \dots, \Phi_r)$, where the $\Phi_i$'s are defined by (15). We have

\[
(32)\qquad \left\| \hat{\Phi}_r(1) - \Phi(1) \right\| = \left\| \sum_{k=1}^{r} \hat{\Phi}_{r,k} - \sum_{k=1}^{\infty} \Phi_k \right\| \le \left\| \sum_{k=1}^{r} \left( \hat{\Phi}_{r,k} - \Phi_{r,k} \right) \right\| + \left\| \sum_{k=1}^{r} \left( \Phi_{r,k} - \Phi_k \right) \right\| + \left\| \sum_{k=r+1}^{\infty} \Phi_k \right\|
\]
\[
= \left\| \left( \hat{\underline{\Phi}}_r - \underline{\Phi}_r \right) E_r \right\| + \left\| \left( \underline{\Phi}_r - \underline{\Phi}_r^* \right) E_r \right\| + \left\| \sum_{k=r+1}^{\infty} \Phi_k \right\| \le \sqrt{r}\, \left\| \hat{\underline{\Phi}}_r - \underline{\Phi}_r \right\| + \sqrt{r}\, \left\| \underline{\Phi}_r^* - \underline{\Phi}_r \right\| + \left\| \sum_{k=r+1}^{\infty} \Phi_k \right\|.
\]

In view of the assumptions of Theorem 3, we have

\[
\left\| \sum_{k=r+1}^{\infty} \Phi_k \right\| \le \sum_{k=r+1}^{\infty} \left\| \Phi_k \right\| \xrightarrow[N \to \infty]{} 0.
\]

To obtain the convergence in probability of $\hat{\Phi}_r(1)$ towards $\Phi(1)$, one must therefore show the convergence in probability towards 0 of $\sqrt{r}\, \| \hat{\underline{\Phi}}_r - \underline{\Phi}_r \|$ and $\sqrt{r}\, \| \underline{\Phi}_r^* - \underline{\Phi}_r \|$.

By (17) we obtain

\[
(33)\qquad \Upsilon_n(\alpha_0) = \underline{\Phi}_r \underline{\Upsilon}_{r,n}(\alpha_0) + u_{r,n},
\]

and $\Sigma_{u_r} = \mathrm{Var}(u_{r,n}) = E\, u_{r,n} \left( \Upsilon_n(\alpha_0) - \underline{\Phi}_r \underline{\Upsilon}_{r,n}(\alpha_0) \right)'$. The vector $u_{r,n}$ is orthogonal to $\underline{\Upsilon}_{r,n}(\alpha_0)$. Therefore

\[
\mathrm{Var}(u_{r,n}) = E \left( \Upsilon_n(\alpha_0) - \underline{\Phi}_r \underline{\Upsilon}_{r,n}(\alpha_0) \right) \Upsilon_n'(\alpha_0) = \Sigma_\Upsilon - \underline{\Phi}_r\, \Sigma_{\Upsilon, \underline{\Upsilon}_r}'.
\]

Consequently the least squares estimator of Σ u r can be rewritten in the form:

\[
(34)\qquad \hat{\Sigma}_{\hat{u}_r} = \hat{\Sigma}_{\hat{\Upsilon}} - \hat{\underline{\Phi}}_r\, \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r}',
\]

where

\[
(35)\qquad \hat{\Sigma}_{\hat{\Upsilon}} = \frac{1}{N} \sum_{n=0}^{N-1} \hat{\Upsilon}_n \hat{\Upsilon}_n'.
\]
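For orientation only, here is a minimal numpy sketch (ours; the function name, interface, and the absence of any additional normalization are our assumptions) of the long-autoregression construction behind $\hat{I}_N^{\mathrm{SP}}$: fit a VAR($r$) to the empirical scores by least squares, then form $\hat{\Phi}_r^{-1}(1)\, \hat{\Sigma}_{\hat{u}_r}\, \hat{\Phi}_r'^{-1}(1)$ from the fit.

```python
import numpy as np

def ar_based_variance(upsilon, r):
    """Sketch of the long-autoregression construction: regress Upsilon_n on
    (Upsilon_{n-1}, ..., Upsilon_{n-r}) by least squares, then return
    Phi_r(1)^{-1} Sigma_u Phi_r(1)'^{-1} built from (34)-(35).
    `upsilon`: (N, d) array of estimated scores; `r`: autoregression order."""
    N, d = upsilon.shape
    Y = upsilon[r:]                                                  # targets
    Z = np.hstack([upsilon[r - k:N - k] for k in range(1, r + 1)])   # lags 1..r
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                        # (r*d, d)
    U = Y - Z @ B                                                    # LS residuals
    Sigma_u = U.T @ U / (N - r)
    Phi1 = np.eye(d) - sum(B[k * d:(k + 1) * d].T for k in range(r)) # Phi_r(1)
    Phi1_inv = np.linalg.inv(Phi1)
    return Phi1_inv @ Sigma_u @ Phi1_inv.T
```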

Similar arguments combined with (15) yield

\[
\Sigma_u = E\, u_n u_n' = E\, u_n \Upsilon_n'(\alpha_0) = E\, \Upsilon_n(\alpha_0) \Upsilon_n'(\alpha_0) - \sum_{k=1}^{r} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0) - \sum_{k=r+1}^{\infty} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0)
\]
\[
= \Sigma_\Upsilon - \underline{\Phi}_r^*\, \Sigma_{\Upsilon, \underline{\Upsilon}_r}' - \sum_{k=r+1}^{\infty} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0).
\]

By (34) we obtain

\[
(36)\qquad \hat{\Sigma}_{\hat{u}_r} - \Sigma_u = \hat{\Sigma}_{\hat{\Upsilon}} - \hat{\underline{\Phi}}_r\, \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r}' - \Sigma_\Upsilon + \underline{\Phi}_r^*\, \Sigma_{\Upsilon, \underline{\Upsilon}_r}' + \sum_{k=r+1}^{\infty} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0)
\]
\[
= \left( \hat{\Sigma}_{\hat{\Upsilon}} - \Sigma_\Upsilon \right) - \left( \hat{\underline{\Phi}}_r - \underline{\Phi}_r^* \right) \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r}' - \underline{\Phi}_r^* \left( \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r}' - \Sigma_{\Upsilon, \underline{\Upsilon}_r}' \right) + \sum_{k=r+1}^{\infty} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0),
\]
so that
\[
\left\| \hat{\Sigma}_{\hat{u}_r} - \Sigma_u \right\| \le \left\| \hat{\Sigma}_{\hat{\Upsilon}} - \Sigma_\Upsilon \right\| + \left\| \hat{\underline{\Phi}}_r - \underline{\Phi}_r^* \right\| \left\| \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r} \right\| + \left\| \underline{\Phi}_r^* \right\| \left\| \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \right\| + \left\| \sum_{k=r+1}^{\infty} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0) \right\|.
\]

In view of the assumptions of Theorem 3 and of (30), we obtain

\[
\left\| \sum_{k=r+1}^{\infty} \Phi_k\, E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0) \right\| \le \sum_{k=r+1}^{\infty} \left\| \Phi_k \right\| \left\| E\, \Upsilon_{n-k}(\alpha_0) \Upsilon_n'(\alpha_0) \right\| \le K \sum_{k=r+1}^{\infty} \frac{1}{k^2} \xrightarrow[N \to \infty]{} 0,
\]

and we also have

\[
\left\| \underline{\Phi}_r^* \right\|^2 \le \sum_{k=1}^{\infty} \mathrm{Tr} \left( \Phi_k \Phi_k' \right) < \infty.
\]

Lemma 1

Under the assumptions of Theorem 3, we have

\[
\sup_{r \ge 1} \max \left\{ \left\| \Sigma_{\Upsilon, \underline{\Upsilon}_r} \right\|, \left\| \Sigma_{\underline{\Upsilon}_r} \right\|, \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \right\} < \infty.
\]

Proof

See Lemma 1 in the supplementary material of Boubacar Mainassara, Carbon, and Francq (2012). □

Lemma 2

Under the assumptions of Theorem 3, there exists a finite positive constant $K$ such that, for $1 \le r_1, r_2 \le r$ and $1 \le m_1, m_2 \le T(p+q+P+Q)$, we have

\[
\sup_{n \in \mathbb{Z}} \sum_{h=-\infty}^{\infty} \left| \mathrm{cov} \left( \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0),\; \Upsilon_{n-r_1-h, m_1}(\alpha_0) \Upsilon_{n-r_2-h, m_2}(\alpha_0) \right) \right| < K.
\]

Proof

We can take the supremum over the integers $n > 0$ and write the proof in the case $m_1 = m_2 = m$. We have

\[
\sum_{h=-\infty}^{\infty} \left| \mathrm{cov} \left( \Upsilon_{n-r_1, m}(\alpha_0) \Upsilon_{n-r_2, m}(\alpha_0),\; \Upsilon_{n-r_1-h, m}(\alpha_0) \Upsilon_{n-r_2-h, m}(\alpha_0) \right) \right| \le K \sum_{h=-\infty}^{\infty} \sum_{h_1, \dots, h_8 = 0}^{\infty} \left| \mathrm{cov} \left( Y_{1, h_1, h_2} Y_{1+s, h_3, h_4},\; Y_{1+h, h_5, h_6} Y_{1+s+h, h_7, h_8} \right) \right|,
\]

where

\[
Y_{t, h_1, h_2} = \epsilon_{t-h_1} \epsilon_{t-h_2} - E\, \epsilon_{t-h_1} \epsilon_{t-h_2}.
\]

A slight extension of (Francq and Zakoïan 2019, Corollary A.3) concludes the proof. □

Lemma 3

Under the assumptions of Theorem 3, $\sqrt{r}\, \| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \|$, $\sqrt{r}\, \| \hat{\Sigma}_{\Upsilon, \underline{\Upsilon}_r} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \|$ and $\sqrt{r}\, \| \hat{\Sigma}_\Upsilon - \Sigma_\Upsilon \|$ tend to 0 in probability as $N \to \infty$ when $r = o(N^{1/3})$.

Proof

For $1 \le m_1, m_2 \le T(p+q+P+Q)$ and $1 \le r_1, r_2 \le r$, the $\left( (r_1 - 1) T(p+q+P+Q) + m_1,\; (r_2 - 1) T(p+q+P+Q) + m_2 \right)$-th element of $\hat{\Sigma}_{\underline{\Upsilon}_r}$ is given by

\[
\frac{1}{N} \sum_{n=0}^{N-1} \Upsilon_{n-r_1, m_1}(\alpha_0)\, \Upsilon_{n-r_2, m_2}(\alpha_0).
\]

For all $\beta > 0$, we use (30) and the Markov inequality to obtain

\[
P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \right\| \ge \beta \right) \le \frac{r}{\beta^2}\, E \left\| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \right\|^2 \le \frac{r}{\beta^2}\, E \left\| \frac{1}{N} \sum_{n=0}^{N-1} \left\{ \underline{\Upsilon}_{r,n} \underline{\Upsilon}_{r,n}' - E\, \underline{\Upsilon}_{r,n} \underline{\Upsilon}_{r,n}' \right\} \right\|^2
\]
\[
\le \frac{r}{\beta^2} \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} E \left| \frac{1}{N} \sum_{n=0}^{N-1} \left\{ \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0) - E\, \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0) \right\} \right|^2.
\]
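The last expectation is the variance of an empirical mean; for a second-order stationary sequence $(Z_n)$ it is controlled in the usual way by the summed autocovariances:
\[
\mathrm{Var} \left( \frac{1}{N} \sum_{n=0}^{N-1} Z_n \right) = \frac{1}{N^2} \sum_{h=-(N-1)}^{N-1} (N - |h|)\, \mathrm{cov}(Z_0, Z_h) \le \frac{1}{N} \sum_{h=-\infty}^{\infty} \left| \mathrm{cov}(Z_0, Z_h) \right|,
\]
which is what the next display applies with $Z_n = \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0)$.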

The stationarity of the process $\left( \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0) \right)_n$ and Lemma 2 imply

\[
P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \right\| \ge \beta \right) \le \frac{r}{\beta^2} \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} \mathrm{Var} \left( \frac{1}{N} \sum_{n=0}^{N-1} \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0) \right)
\]
\[
\le \frac{r}{N^2 \beta^2} \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} \sum_{h=-(N-1)}^{N-1} (N - |h|) \left| \mathrm{cov} \left( \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0),\; \Upsilon_{n-r_1-h, m_1}(\alpha_0) \Upsilon_{n-r_2-h, m_2}(\alpha_0) \right) \right|
\]
\[
\le \frac{r}{N \beta^2} \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} \sup_{n \in \mathbb{Z}} \sum_{h=-\infty}^{\infty} \left| \mathrm{cov} \left( \Upsilon_{n-r_1, m_1}(\alpha_0) \Upsilon_{n-r_2, m_2}(\alpha_0),\; \Upsilon_{n-r_1-h, m_1}(\alpha_0) \Upsilon_{n-r_2-h, m_2}(\alpha_0) \right) \right| \le \frac{K \left( T(p+q+P+Q) \right)^2 r^3}{N \beta^2}.
\]

Finally, we have

\[
E \left( r \left\| \hat{\Sigma}_\Upsilon - \Sigma_\Upsilon \right\|^2 \right) \le E \left( r \left\| \hat{\Sigma}_{\Upsilon, \underline{\Upsilon}_r} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \right\|^2 \right) \le E \left( r \left\| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \right\|^2 \right) \le \frac{K \left( T(p+q+P+Q) \right)^2 r^3}{N} \xrightarrow[N \to \infty]{} 0
\]

when $r = o(N^{1/3})$. The result follows. □
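As a quick numerical reading of the rate condition (ours, elementary arithmetic): the bound above is $O(r^3/N)$, so
\[
r = N^{1/4} \;\Rightarrow\; \frac{r^3}{N} = N^{-1/4} \to 0, \qquad \text{whereas} \qquad r = N^{1/3} \;\Rightarrow\; \frac{r^3}{N} = 1 \not\to 0,
\]
which is why the lemmas require $r = o(N^{1/3})$ rather than $r = O(N^{1/3})$.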

We show in the following lemma that the previous lemma remains valid when we replace ϒ n (α 0) by ϒ ̂ n .

Lemma 4

Under the assumptions of Theorem 3, $\sqrt{r}\, \| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \Sigma_{\underline{\Upsilon}_r} \|$, $\sqrt{r}\, \| \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \|$ and $\sqrt{r}\, \| \hat{\Sigma}_{\hat{\Upsilon}} - \Sigma_\Upsilon \|$ tend to 0 in probability as $N \to \infty$ when $r = o(N^{1/3})$.

Proof

We denote by $\hat{\Sigma}_{\underline{\Upsilon}_r, N}$ the matrix obtained by replacing $e_n(\hat{\alpha})$ by $\epsilon_n(\hat{\alpha})$ in $\hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}$. We have

\[
\sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \Sigma_{\underline{\Upsilon}_r} \right\| \le \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| + \sqrt{r}\, \left\| \hat{\Sigma}_{\underline{\Upsilon}_r, N} - \hat{\Sigma}_{\underline{\Upsilon}_r} \right\| + \sqrt{r}\, \left\| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \right\|.
\]

By Lemma 3, the term $\sqrt{r}\, \| \hat{\Sigma}_{\underline{\Upsilon}_r} - \Sigma_{\underline{\Upsilon}_r} \|$ converges to 0 in probability. The lemma will thus be proved as soon as we show that

\[
(37)\qquad \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| = o_P(1)
\]
\[
(38)\qquad \text{and} \qquad \sqrt{r}\, \left\| \hat{\Sigma}_{\underline{\Upsilon}_r, N} - \hat{\Sigma}_{\underline{\Upsilon}_r} \right\| = o_P(1)
\]

when r = o ( N 1 3 ) . This is done in two separate steps.

Step 1: proof of (37).

For all β > 0 we have

\[
P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| \ge \beta \right) \le \frac{\sqrt{r}}{\beta}\, E \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| \le \frac{\sqrt{r}}{\beta}\, E \left\| \frac{1}{N} \sum_{n=0}^{N-1} \hat{\underline{\Upsilon}}_{r,n} \hat{\underline{\Upsilon}}_{r,n}' - \frac{1}{N} \sum_{n=0}^{N-1} \underline{\Upsilon}_{r,n}(N) \underline{\Upsilon}_{r,n}'(N) \right\|
\]
\[
\le \frac{K \sqrt{r}}{\beta} \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} E \left| \frac{1}{N} \sum_{n=0}^{N-1} \left\{ \hat{\Upsilon}_{n-r_1, m_1} \hat{\Upsilon}_{n-r_2, m_2} - \Upsilon_{n-r_1, m_1}(N) \Upsilon_{n-r_2, m_2}(N) \right\} \right|,
\]

where

\[
\Upsilon_{n, m}(N) = \sum_{\nu=1}^{T} \frac{1}{\sigma_\nu^2}\, \frac{\partial \epsilon_{nT+\nu}(\hat{\alpha})}{\partial \alpha_m}\, \epsilon_{nT+\nu}(\hat{\alpha}) \qquad \text{and} \qquad \underline{\Upsilon}_{r,n}(N) = \left( \Upsilon_{n-1}'(N), \dots, \Upsilon_{n-r}'(N) \right)'.
\]

It follows that

\[
(39)\qquad P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| \ge \beta \right) \le \frac{K \sqrt{r}}{N \beta} \sum_{\nu=1}^{T} \left| \frac{1}{\hat{\sigma}_\nu^2} - \frac{1}{\sigma_\nu^2} \right| \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} E \left| \sum_{n=0}^{N-1} \left\{ \hat{A}_{n, r_1} \hat{A}_{n, r_2} - A_{n, r_1} A_{n, r_2} \right\} \right|,
\]

where

\[
A_{n, r_i} = \frac{\partial \epsilon_{nT+\nu-r_i}(\hat{\alpha})}{\partial \alpha_{m_i}}\, \epsilon_{nT+\nu-r_i}(\hat{\alpha}), \qquad \text{for } i = 1, 2,
\]
and $\hat{A}_{n, r_i}$ is defined analogously with $e$ in place of $\epsilon$.

Now notice that

\[
\frac{\partial e_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} e_{nT+\nu-r_1}(\hat{\alpha})\, \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} e_{nT+\nu-r_2}(\hat{\alpha}) - \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} \epsilon_{nT+\nu-r_1}(\hat{\alpha})\, \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} \epsilon_{nT+\nu-r_2}(\hat{\alpha})
\]
\[
= \left\{ e_{nT+\nu-r_1}(\hat{\alpha}) - \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \right\} \frac{\partial e_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, e_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} + \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \left\{ \frac{\partial e_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} - \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} \right\} e_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}}
\]
\[
+ \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} \left\{ e_{nT+\nu-r_2}(\hat{\alpha}) - \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \right\} \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} + \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \left\{ \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} - \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} \right\}.
\]

Replacing the above identity in (39), we obtain by Hölder's inequality that

\[
(40)\qquad P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| \ge \beta \right) \le \frac{K \sqrt{r}}{N \beta} \sum_{\nu=1}^{T} \left| \frac{1}{\hat{\sigma}_\nu^2} - \frac{1}{\sigma_\nu^2} \right| \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} E \left( \left| D_{N,1} \right| + \left| D_{N,2} \right| + \left| D_{N,3} \right| + \left| D_{N,4} \right| \right),
\]

where

\[
D_{N,1} = \sum_{n=0}^{N-1} \left\{ e_{nT+\nu-r_1}(\hat{\alpha}) - \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \right\} \frac{\partial e_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, e_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}},
\]
\[
D_{N,2} = \sum_{n=0}^{N-1} \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \left\{ \frac{\partial e_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} - \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} \right\} e_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}},
\]
\[
D_{N,3} = \sum_{n=0}^{N-1} \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} \left\{ e_{nT+\nu-r_2}(\hat{\alpha}) - \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \right\} \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}},
\]
\[
D_{N,4} = \sum_{n=0}^{N-1} \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \left\{ \frac{\partial e_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} - \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} \right\}.
\]

We have

\[
\left| D_{N,1} \right| \le K \sum_{n=0}^{N-1} \left| e_{nT+\nu-r_1}(\hat{\alpha}) - \epsilon_{nT+\nu-r_1}(\hat{\alpha}) \right| \le K \left( r + \sum_{n=r_1+1}^{N-1} \rho^{n} \right).
\]
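Indeed, by (25) the geometric tail is bounded uniformly in $N$ and $r_1$:
\[
\sum_{n=r_1+1}^{N-1} \rho^{n} \le \frac{\rho^{r_1+1}}{1-\rho} \le \frac{1}{1-\rho},
\]
so the right-hand side above is at most $K \left( r + (1-\rho)^{-1} \right) \le K r$.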

Then we obtain

\[
(41)\qquad E \left| D_{N,1} \right| \le K r.
\]

The same calculations hold for the terms D N,2, D N,3, and D N,4. Thus

\[
(42)\qquad E \left( \left| D_{N,1} \right| + \left| D_{N,2} \right| + \left| D_{N,3} \right| + \left| D_{N,4} \right| \right) \le K r,
\]

and reporting this estimate in (40) and using (29), we have

\[
P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \right\| \ge \beta \right) \le \epsilon\, \frac{K \sqrt{r}}{N \beta}\, r\, T^2 (p+q+P+Q)^2\, K r \le K \epsilon\, \frac{r^{5/2}}{N \beta}.
\]

Then the sequence $\sqrt{r}\, \| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} - \hat{\Sigma}_{\underline{\Upsilon}_r, N} \|$ converges in probability to 0 as $N \to \infty$ when $r = r(N) = o(N^{2/5})$, and in particular when $r = o(N^{1/3})$.

Step 2: proof of (38).

First, we follow the same approach as in the previous step. We have

\[
\left\| \hat{\Sigma}_{\underline{\Upsilon}_r, N} - \hat{\Sigma}_{\underline{\Upsilon}_r} \right\|^2 = \left\| \frac{1}{N} \sum_{n=0}^{N-1} \left\{ \underline{\Upsilon}_{r,n}(N) \underline{\Upsilon}_{r,n}'(N) - \underline{\Upsilon}_{r,n} \underline{\Upsilon}_{r,n}' \right\} \right\|^2 \le K \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} \left| \frac{1}{N} \sum_{n=0}^{N-1} \left\{ \Upsilon_{n-r_1, m_1}(N) \Upsilon_{n-r_2, m_2}(N) - \Upsilon_{n-r_1, m_1} \Upsilon_{n-r_2, m_2} \right\} \right|^2
\]
\[
\le K \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} \sum_{\nu=1}^{T} \frac{1}{\sigma_\nu^4} \left| \frac{1}{N} \sum_{n=0}^{N-1} \left\{ A_{n, r_1}(\hat{\alpha}) A_{n, r_2}(\hat{\alpha}) - A_{n, r_1}(\alpha_0) A_{n, r_2}(\alpha_0) \right\} \right|^2.
\]

Since

\[
\frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} \epsilon_{nT+\nu-r_1}(\hat{\alpha})\, \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} \epsilon_{nT+\nu-r_2}(\hat{\alpha}) - \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}} \epsilon_{nT+\nu-r_1}(\alpha_0)\, \frac{\partial \epsilon_{nT+\nu-r_2}(\alpha_0)}{\partial \alpha_{m_2}} \epsilon_{nT+\nu-r_2}(\alpha_0)
\]
\[
= \left\{ \epsilon_{nT+\nu-r_1}(\hat{\alpha}) - \epsilon_{nT+\nu-r_1}(\alpha_0) \right\} \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} + \epsilon_{nT+\nu-r_1}(\alpha_0) \left\{ \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} - \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}} \right\} \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}}
\]
\[
+ \epsilon_{nT+\nu-r_1}(\alpha_0) \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}} \left\{ \epsilon_{nT+\nu-r_2}(\hat{\alpha}) - \epsilon_{nT+\nu-r_2}(\alpha_0) \right\} \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} + \epsilon_{nT+\nu-r_1}(\alpha_0) \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\alpha_0) \left\{ \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} - \frac{\partial \epsilon_{nT+\nu-r_2}(\alpha_0)}{\partial \alpha_{m_2}} \right\},
\]

one has

\[
(43)\qquad \left\| \hat{\Sigma}_{\underline{\Upsilon}_r, N} - \hat{\Sigma}_{\underline{\Upsilon}_r} \right\| \le K \sum_{r_1=1}^{r} \sum_{r_2=1}^{r} \sum_{m_1=1}^{T(p+q+P+Q)} \sum_{m_2=1}^{T(p+q+P+Q)} \left( E_{N,1} + E_{N,2} + E_{N,3} + E_{N,4} \right),
\]

where

\[
E_{N,1} = \frac{1}{N} \sum_{n=0}^{N-1} \left\{ \epsilon_{nT+\nu-r_1}(\hat{\alpha}) - \epsilon_{nT+\nu-r_1}(\alpha_0) \right\} \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}},
\]
\[
E_{N,2} = \frac{1}{N} \sum_{n=0}^{N-1} \epsilon_{nT+\nu-r_1}(\alpha_0) \left\{ \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} - \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}} \right\} \epsilon_{nT+\nu-r_2}(\hat{\alpha}) \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}},
\]
\[
E_{N,3} = \frac{1}{N} \sum_{n=0}^{N-1} \epsilon_{nT+\nu-r_1}(\alpha_0) \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}} \left\{ \epsilon_{nT+\nu-r_2}(\hat{\alpha}) - \epsilon_{nT+\nu-r_2}(\alpha_0) \right\} \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}},
\]
\[
E_{N,4} = \frac{1}{N} \sum_{n=0}^{N-1} \epsilon_{nT+\nu-r_1}(\alpha_0) \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\alpha_0) \left\{ \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} - \frac{\partial \epsilon_{nT+\nu-r_2}(\alpha_0)}{\partial \alpha_{m_2}} \right\}.
\]

Taylor expansions around $\alpha_0$ yield that there exist $\underline{\alpha}$ and $\bar{\alpha}$ between $\hat{\alpha}$ and $\alpha_0$ such that

\[
\left| \epsilon_{nT+\nu-r_1}(\hat{\alpha}) - \epsilon_{nT+\nu-r_1}(\alpha_0) \right| \le \left\| r_{nT+\nu} \right\| \left\| \hat{\alpha} - \alpha_0 \right\|
\]

and

\[
\left| \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}} - \frac{\partial \epsilon_{nT+\nu-r_1}(\alpha_0)}{\partial \alpha_{m_1}} \right| \le \left\| s_{nT+\nu} \right\| \left\| \hat{\alpha} - \alpha_0 \right\|
\]

with $r_{nT+\nu} = \partial \epsilon_{nT+\nu}(\underline{\alpha}) / \partial \alpha$ and $s_{nT+\nu} = \partial^2 \epsilon_{nT+\nu}(\bar{\alpha}) / \partial \alpha\, \partial \alpha_{m_1}$. Using the fact that

\[
E \left\| r_{nT+\nu-r_1}\, \frac{\partial \epsilon_{nT+\nu-r_1}(\hat{\alpha})}{\partial \alpha_{m_1}}\, \epsilon_{nT+\nu-r_2}(\hat{\alpha})\, \frac{\partial \epsilon_{nT+\nu-r_2}(\hat{\alpha})}{\partial \alpha_{m_2}} \right\| < \infty
\]

and that $\left( \sqrt{N} (\hat{\alpha} - \alpha_0) \right)_N$ is a tight sequence (which implies that $\hat{\alpha} - \alpha_0 = O_P(1/\sqrt{N})$), we deduce that

\[
E_{N,1} = O_P \left( \frac{1}{\sqrt{N}} \right).
\]

The same arguments are valid for $E_{N,2}$, $E_{N,3}$ and $E_{N,4}$. Consequently, $E_{N,1} + E_{N,2} + E_{N,3} + E_{N,4} = O_P(1/\sqrt{N})$ and (43) yields

\[
\left\| \hat{\Sigma}_{\underline{\Upsilon}_r, N} - \hat{\Sigma}_{\underline{\Upsilon}_r} \right\|^2 = O_P \left( \frac{r^2}{N} \right);
\]

when $r = o(N^{1/3})$, we finally obtain $\sqrt{r}\, \| \hat{\Sigma}_{\underline{\Upsilon}_r, N} - \hat{\Sigma}_{\underline{\Upsilon}_r} \| = o_P(1)$. □

Lemma 5

Under the assumptions of Theorem 3, we have

\[
\sqrt{r}\, \left\| \underline{\Phi}_r^* - \underline{\Phi}_r \right\| = o(1) \quad \text{as } r \to \infty.
\]

Proof

Recall that, by (15) and (33), we have

\[
\Upsilon_n(\alpha_0) = \underline{\Phi}_r \underline{\Upsilon}_{r,n} + u_{r,n} = \underline{\Phi}_r^* \underline{\Upsilon}_{r,n} + \sum_{k=r+1}^{\infty} \Phi_k \Upsilon_{n-k}(\alpha_0) + u_n =: \underline{\Phi}_r^* \underline{\Upsilon}_{r,n} + u_{r,n}^*.
\]

By the orthogonality conditions in (15) and (33), one has

\[
\Sigma_{u_r^*, \underline{\Upsilon}_r} := E\, u_{r,n}^* \underline{\Upsilon}_{r,n}' = E \left( \Upsilon_n(\alpha_0) - \underline{\Phi}_r^* \underline{\Upsilon}_{r,n} \right) \underline{\Upsilon}_{r,n}' = E \left( \underline{\Phi}_r \underline{\Upsilon}_{r,n} + u_{r,n} - \underline{\Phi}_r^* \underline{\Upsilon}_{r,n} \right) \underline{\Upsilon}_{r,n}' = \left( \underline{\Phi}_r - \underline{\Phi}_r^* \right) \Sigma_{\underline{\Upsilon}_r},
\]

and consequently

\[
(44)\qquad \underline{\Phi}_r^* - \underline{\Phi}_r = -\Sigma_{u_r^*, \underline{\Upsilon}_r} \Sigma_{\underline{\Upsilon}_r}^{-1}.
\]

Using Lemmas 1 and 2 together with (44), we obtain

\[
P \left( \sqrt{r}\, \left\| \underline{\Phi}_r^* - \underline{\Phi}_r \right\| \ge \beta \right) \le \frac{\sqrt{r}}{\beta}\, \left\| \Sigma_{u_r^*, \underline{\Upsilon}_r} \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \le \frac{K \sqrt{r}}{\beta}\, \left\| E \left( \sum_{k=r+1}^{\infty} \Phi_k \Upsilon_{n-k}(\alpha_0) + u_n \right) \underline{\Upsilon}_{r,n}' \right\| \le \frac{K \sqrt{r}}{\beta} \sum_{k=r+1}^{\infty} \left\| \Phi_k \right\| \left\| E\, \Upsilon_{n-k}(\alpha_0) \underline{\Upsilon}_{r,n}' \right\|
\]
\[
\le \frac{K \sqrt{r}}{\beta} \sum_{l \ge 1} \left\| \Phi_{l+r} \right\| \left( \sum_{j=1}^{T(p+q+P+Q)} \sum_{k=1}^{T(p+q+P+Q)} \sum_{r_1=1}^{r} E \left| \Upsilon_{n-r_1, j}(\alpha_0) \right|^2 E \left| \Upsilon_{n, k}(\alpha_0) \right|^2 \right)^{1/2} \le \frac{K\, T(p+q+P+Q)\, r}{\beta} \sum_{l \ge 1} \left\| \Phi_{l+r} \right\|.
\]

By the assumptions of Theorem 3, $r \sum_{l \ge 1} \| \Phi_{l+r} \| = o(1)$ as $r \to \infty$, so the right-hand side tends to 0. The proof of the lemma follows. □

Lemma 6

Under the assumptions of Theorem 3, we have

\[
\sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| = o_P(1)
\]

as N → ∞ when r = o ( N 1 3 ) and r → ∞.

Proof

We have

\[
\left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \le \left( \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| + \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \right) \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|,
\]

and by induction we obtain

\[
\left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \le \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \sum_{k=1}^{\infty} \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\|^k \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|^k.
\]
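For completeness, the induction rests on the standard resolvent identity
\[
\hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} = \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} \left( \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right) \Sigma_{\underline{\Upsilon}_r}^{-1},
\]
iterated on its own right-hand side; the resulting geometric (Neumann) series converges on the event $\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \|\, \| \Sigma_{\underline{\Upsilon}_r}^{-1} \| < 1$, which is precisely the event split used in the next display.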

We have

\[
P \left( \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| > \beta \right) \le P \left( \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \sum_{k=1}^{\infty} \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\|^k \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|^k > \frac{\beta}{\sqrt{r}} \ \text{and} \ \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| < 1 \right)
\]
\[
+\ P \left( \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \sum_{k=1}^{\infty} \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\|^k \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|^k > \frac{\beta}{\sqrt{r}} \ \text{and} \ \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \ge 1 \right)
\]
\[
\le P \left( \sqrt{r}\, \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|^2 \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \left( 1 - \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \right)^{-1} > \beta \right) + P \left( \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \ge 1 \right)
\]
\[
\le P \left( \sqrt{r}\, \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| > \frac{\beta}{\left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|^2 + \beta r^{-1/2} \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|} \right) + P \left( \sqrt{r}\, \left\| \Sigma_{\underline{\Upsilon}_r} - \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r} \right\| \ge \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\|^{-1} \right).
\]

Lemmas 1 and 4 imply the result. □

Lemma 7

Under the assumptions of Theorem 3, we have

\[
\sqrt{r}\, \left\| \hat{\underline{\Phi}}_r - \underline{\Phi}_r \right\| = o_P(1) \quad \text{as } r \to \infty \text{ and } r = o(N^{1/3}).
\]

Proof

Lemmas 1 and 6 yield

\[
(45)\qquad \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} \right\| \le \left\| \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| + \left\| \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| = O_P(1).
\]

By (33), we have

\[
0 = E\, u_{r,n} \underline{\Upsilon}_{r,n}' = E \left( \Upsilon_n(\alpha_0) - \underline{\Phi}_r \underline{\Upsilon}_{r,n} \right) \underline{\Upsilon}_{r,n}' = \Sigma_{\Upsilon, \underline{\Upsilon}_r} - \underline{\Phi}_r \Sigma_{\underline{\Upsilon}_r},
\]

and so we have $\underline{\Phi}_r = \Sigma_{\Upsilon, \underline{\Upsilon}_r} \Sigma_{\underline{\Upsilon}_r}^{-1}$. Lemmas 1, 4 and 6 together with (45) imply

\[
\sqrt{r}\, \left\| \hat{\underline{\Phi}}_r - \underline{\Phi}_r \right\| = \sqrt{r}\, \left\| \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r} \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \Sigma_{\underline{\Upsilon}_r}^{-1} \right\| \le \sqrt{r}\, \left\| \left( \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \right) \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} \right\| + \sqrt{r}\, \left\| \Sigma_{\Upsilon, \underline{\Upsilon}_r} \left( \hat{\Sigma}_{\hat{\underline{\Upsilon}}_r}^{-1} - \Sigma_{\underline{\Upsilon}_r}^{-1} \right) \right\| = o_P(1).
\]

The lemma is proved. □

Proof of Theorem 3

Proof

By Lemma 4, we have $\| \hat{\Sigma}_{\hat{\Upsilon}} - \Sigma_\Upsilon \| = o_P(r^{-1/2}) = o_P(1)$ and $\| \hat{\Sigma}_{\hat{\Upsilon}, \hat{\underline{\Upsilon}}_r} - \Sigma_{\Upsilon, \underline{\Upsilon}_r} \| = o_P(r^{-1/2}) = o_P(1)$, and by Lemmas 5 and 7 we have $\| \hat{\underline{\Phi}}_r - \underline{\Phi}_r \| = o_P(r^{-1/2})$ and $\| \underline{\Phi}_r^* - \underline{\Phi}_r \| = o(r^{-1/2})$. In view of (31), (32) and (36), Theorem 3 is then proved. □

Appendix B: Verification of Assumption (A1) on an Example

Example 1

We now examine the causality and stationarity conditions for the following SPARMA(1, 1)(1, 1) model:

\[
X_{nT+\nu} = a_1(\nu) X_{nT+\nu-1} + \tilde{a}_1(\nu) X_{(n-1)T+\nu} - a_1(\nu) \tilde{a}_1(\nu) X_{(n-1)T+\nu-1} - b_1(\nu) \epsilon_{nT+\nu-1} - \tilde{b}_1(\nu) \epsilon_{(n-1)T+\nu} + b_1(\nu) \tilde{b}_1(\nu) \epsilon_{(n-1)T+\nu-1} + \epsilon_{nT+\nu},
\]
where $a_1(\nu)$, $b_1(\nu)$ denote the regular parameters and $\tilde{a}_1(\nu)$, $\tilde{b}_1(\nu)$ the seasonal ones.
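A direct way to see the recursion at work is to simulate it; the following sketch (ours, with arbitrary parameter values and iid Gaussian noise, whereas the weak-SPARMA setting would allow merely uncorrelated errors) implements the displayed equation for $T = 4$:

```python
import numpy as np

# Simulate the SPARMA(1,1)(1,1) recursion above with period T = 4.
T, n_cycles = 4, 1000
rng = np.random.default_rng(0)
a, atil = rng.uniform(-0.5, 0.5, T), rng.uniform(-0.5, 0.5, T)  # regular / seasonal AR
b, btil = rng.uniform(-0.5, 0.5, T), rng.uniform(-0.5, 0.5, T)  # regular / seasonal MA
eps = rng.normal(size=T * n_cycles)       # iid here; weak case: uncorrelated only
x = np.zeros(T * n_cycles)
for t in range(T + 1, T * n_cycles):
    nu = t % T                            # season of time t (0-indexed)
    x[t] = (a[nu] * x[t - 1] + atil[nu] * x[t - T]
            - a[nu] * atil[nu] * x[t - T - 1]
            + eps[t] - b[nu] * eps[t - 1] - btil[nu] * eps[t - T]
            + b[nu] * btil[nu] * eps[t - T - 1])
```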

This SPARMA(1, 1)(1, 1) model admits the following PARMA($1+T$, $1+T$) representation:

\[
X_{nT+\nu} - \sum_{i=1}^{1+T} a_i^*(\nu) X_{nT+\nu-i} = -\sum_{i=1}^{1+T} b_i^*(\nu) \epsilon_{nT+\nu-i} + \epsilon_{nT+\nu}.
\]

Finally, we deduce the following VARMA(2, 2) representation:

\[
A_0 X_n = A_1 X_{n-1} + A_2 X_{n-2} + B_0 \epsilon_n - B_1 \epsilon_{n-1} - B_2 \epsilon_{n-2},
\]

where the matrices $A_0$, $A_1$, $A_2$, $B_0$, $B_1$ and $B_2$ are given, for simplicity in the case $T = 4$, by

\[
A_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -a_1^*(2) & 1 & 0 & 0 \\ -a_2^*(3) & -a_1^*(3) & 1 & 0 \\ -a_3^*(4) & -a_2^*(4) & -a_1^*(4) & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -a_1(2) & 1 & 0 & 0 \\ 0 & -a_1(3) & 1 & 0 \\ 0 & 0 & -a_1(4) & 1 \end{pmatrix},
\]

\[
A_1 = \begin{pmatrix} a_4^*(1) & a_3^*(1) & a_2^*(1) & a_1^*(1) \\ a_5^*(2) & a_4^*(2) & a_3^*(2) & a_2^*(2) \\ a_6^*(3) & a_5^*(3) & a_4^*(3) & a_3^*(3) \\ a_7^*(4) & a_6^*(4) & a_5^*(4) & a_4^*(4) \end{pmatrix} = \begin{pmatrix} \tilde{a}_1(1) & 0 & 0 & a_1(1) \\ -a_1(2)\tilde{a}_1(2) & \tilde{a}_1(2) & 0 & 0 \\ 0 & -a_1(3)\tilde{a}_1(3) & \tilde{a}_1(3) & 0 \\ 0 & 0 & -a_1(4)\tilde{a}_1(4) & \tilde{a}_1(4) \end{pmatrix},
\]

\[
A_2 = \begin{pmatrix} a_8^*(1) & a_7^*(1) & a_6^*(1) & a_5^*(1) \\ a_9^*(2) & a_8^*(2) & a_7^*(2) & a_6^*(2) \\ a_{10}^*(3) & a_9^*(3) & a_8^*(3) & a_7^*(3) \\ a_{11}^*(4) & a_{10}^*(4) & a_9^*(4) & a_8^*(4) \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & -a_1(1)\tilde{a}_1(1) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},
\]

\[
A_0 - A_1 z - A_2 z^2 = \begin{pmatrix} 1 - \tilde{a}_1(1) z & 0 & 0 & -a_1(1) z + a_1(1) \tilde{a}_1(1) z^2 \\ -a_1(2) + a_1(2) \tilde{a}_1(2) z & 1 - \tilde{a}_1(2) z & 0 & 0 \\ 0 & -a_1(3) + a_1(3) \tilde{a}_1(3) z & 1 - \tilde{a}_1(3) z & 0 \\ 0 & 0 & -a_1(4) + a_1(4) \tilde{a}_1(4) z & 1 - \tilde{a}_1(4) z \end{pmatrix};
\]

we have $\det \left( A_0 - A_1 z - A_2 z^2 \right) = \prod_{\nu=1}^{T} \left( 1 - \tilde{a}_1(\nu) z \right) \left( 1 - \prod_{\nu=1}^{T} a_1(\nu) z \right) \ne 0$ for all $|z| \le 1$, and

\[
B_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -b_1^*(2) & 1 & 0 & 0 \\ -b_2^*(3) & -b_1^*(3) & 1 & 0 \\ -b_3^*(4) & -b_2^*(4) & -b_1^*(4) & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -b_1(2) & 1 & 0 & 0 \\ 0 & -b_1(3) & 1 & 0 \\ 0 & 0 & -b_1(4) & 1 \end{pmatrix},
\]

\[
B_1 = \begin{pmatrix} b_4^*(1) & b_3^*(1) & b_2^*(1) & b_1^*(1) \\ b_5^*(2) & b_4^*(2) & b_3^*(2) & b_2^*(2) \\ b_6^*(3) & b_5^*(3) & b_4^*(3) & b_3^*(3) \\ b_7^*(4) & b_6^*(4) & b_5^*(4) & b_4^*(4) \end{pmatrix} = \begin{pmatrix} \tilde{b}_1(1) & 0 & 0 & b_1(1) \\ -b_1(2)\tilde{b}_1(2) & \tilde{b}_1(2) & 0 & 0 \\ 0 & -b_1(3)\tilde{b}_1(3) & \tilde{b}_1(3) & 0 \\ 0 & 0 & -b_1(4)\tilde{b}_1(4) & \tilde{b}_1(4) \end{pmatrix},
\]

\[
B_2 = \begin{pmatrix} b_8^*(1) & b_7^*(1) & b_6^*(1) & b_5^*(1) \\ b_9^*(2) & b_8^*(2) & b_7^*(2) & b_6^*(2) \\ b_{10}^*(3) & b_9^*(3) & b_8^*(3) & b_7^*(3) \\ b_{11}^*(4) & b_{10}^*(4) & b_9^*(4) & b_8^*(4) \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & -b_1(1)\tilde{b}_1(1) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

We have $\det \left( B_0 - B_1 z - B_2 z^2 \right) = \prod_{\nu=1}^{T} \left( 1 - \tilde{b}_1(\nu) z \right) \left( 1 - \prod_{\nu=1}^{T} b_1(\nu) z \right) \ne 0$ for all $|z| \le 1$.
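The determinant factorization is easy to confirm numerically; here is our own check (with hypothetical parameter values) for the AR side with $T = 4$:

```python
import numpy as np

# Check: det(A0 - A1 z - A2 z^2) = prod_nu (1 - atil[nu] z) * (1 - prod_nu a[nu] z).
rng = np.random.default_rng(2)
a = rng.uniform(-0.5, 0.5, 4)        # a_1(nu),  nu = 1..4
atil = rng.uniform(-0.5, 0.5, 4)     # seasonal coefficients ~a_1(nu)

A0, A1, A2 = np.eye(4), np.zeros((4, 4)), np.zeros((4, 4))
for nu in range(4):                  # row nu corresponds to season nu + 1
    A1[nu, nu] = atil[nu]
    if nu == 0:
        A1[0, 3] = a[0]
        A2[0, 3] = -a[0] * atil[0]
    else:
        A0[nu, nu - 1] = -a[nu]
        A1[nu, nu - 1] = -a[nu] * atil[nu]

z = 0.7 + 0.2j                       # arbitrary test point
lhs = np.linalg.det(A0 - A1 * z - A2 * z**2)
rhs = np.prod(1 - atil * z) * (1 - np.prod(a) * z)
assert abs(lhs - rhs) < 1e-10        # the two sides agree
```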

References

Aknouche, A., and A. Bibi. 2009. "Quasi-Maximum Likelihood Estimation of Periodic GARCH and Periodic ARMA-GARCH Processes." Journal of Time Series Analysis 30 (1): 19–46. https://doi.org/10.1111/j.1467-9892.2008.00598.x.

Akutowicz, E. J. 1958. "On an Explicit Formula in Linear Least Squares Prediction." Mathematica Scandinavica: 261–6. https://doi.org/10.7146/math.scand.a-10503.

Andrews, D. W. K. 1991. "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation." Econometrica 59 (3): 817–58. https://doi.org/10.2307/2938229.

Basawa, I. V., and R. Lund. 2001. "Large Sample Properties of Parameter Estimates for Periodic ARMA Models." Journal of Time Series Analysis 22 (6): 651–63. https://doi.org/10.1111/1467-9892.00246.

Basawa, I. V., R. Lund, and Q. Shao. 2004. "First-Order Seasonal Autoregressive Processes with Periodically Varying Parameters." Statistics & Probability Letters 67 (4): 299–306. https://doi.org/10.1016/j.spl.2004.02.001.

Battaglia, F., D. Cucina, and M. Rizzo. 2018. "A Generalization of Periodic Autoregressive Models for Seasonal Time Series." Technical Report, Vol. 2. Department of Statistical Sciences, University La Sapienza.

Battaglia, F., D. Cucina, and M. Rizzo. 2020. "Parsimonious Periodic Autoregressive Models for Time Series with Evolving Trend and Seasonality." Statistics and Computing 30 (1): 77–91. https://doi.org/10.1007/s11222-019-09866-0.

Berk, K. N. 1974. "Consistent Autoregressive Spectral Estimates." Annals of Statistics 2: 489–502. https://doi.org/10.1214/aos/1176342709.

Bollerslev, T., and E. Ghysels. 1996. "Periodic Autoregressive Conditional Heteroscedasticity." Journal of Business & Economic Statistics 14 (2): 139–51. https://doi.org/10.1080/07350015.1996.10524640.

Boubacar Mainassara, Y. 2011. "Multivariate Portmanteau Test for Structural VARMA Models with Uncorrelated but Non-independent Error Terms." Journal of Statistical Planning and Inference 141 (8): 2961–75. https://doi.org/10.1016/j.jspi.2011.03.022.

Boubacar Maïnassara, Y. 2012. "Selection of Weak VARMA Models by Modified Akaike's Information Criteria." Journal of Time Series Analysis 33 (1): 121–30. https://doi.org/10.1111/j.1467-9892.2011.00746.x.

Boubacar Mainassara, Y., and C. Francq. 2011. "Estimating Structural VARMA Models with Uncorrelated but Non-independent Error Terms." Journal of Multivariate Analysis 102 (3): 496–505. https://doi.org/10.1016/j.jmva.2010.10.009.

Boubacar Mainassara, Y., M. Carbon, and C. Francq. 2012. "Computing and Estimating Information Matrices of Weak ARMA Models." Computational Statistics & Data Analysis 56 (2): 345–61. https://doi.org/10.1016/j.csda.2011.07.006.

Boubacar Maïnassara, Y., and C. C. Kokonendji. 2016. "Modified Schwarz and Hannan–Quinn Information Criteria for Weak VARMA Models." Statistical Inference for Stochastic Processes 19 (2): 199–217. https://doi.org/10.1007/s11203-015-9123-z.

Boubacar Maïnassara, Y., and B. Saussereau. 2018. "Diagnostic Checking in Multivariate ARMA Models with Dependent Errors Using Normalized Residual Autocorrelations." Journal of the American Statistical Association 113 (524): 1813–27. https://doi.org/10.1080/01621459.2017.1380030.

Brockwell, P. J., and R. A. Davis. 1991. Time Series: Theory and Methods, 2nd ed. Springer Series in Statistics. New York: Springer-Verlag. https://doi.org/10.1007/978-1-4419-0320-4.

den Haan, W. J., and A. T. Levin. 1997. "A Practitioner's Guide to Robust Covariance Matrix Estimation." In Robust Inference, Volume 15 of Handbook of Statistics, 299–342. Amsterdam: North-Holland. https://doi.org/10.1016/S0169-7161(97)15014-3.

Dufour, J.-M., and D. Pelletier. 2021. "Practical Methods for Modeling Weak VARMA Processes: Identification, Estimation and Specification with a Macroeconomic Application." Journal of Business & Economic Statistics: 1–13. https://doi.org/10.1080/07350015.2021.1904960.

Francq, C., R. Roy, and A. Saidi. 2011. "Asymptotic Properties of Weighted Least Squares Estimation in Weak PARMA Models." Journal of Time Series Analysis 32 (6): 699–723. https://doi.org/10.1111/j.1467-9892.2011.00728.x.

Francq, C., and J.-M. Zakoïan. 2007. "HAC Estimation and Strong Linearity Testing in Weak ARMA Models." Journal of Multivariate Analysis 98 (1): 114–44. https://doi.org/10.1016/j.jmva.2006.02.003.

Francq, C., and J.-M. Zakoïan. 2019. GARCH Models: Structure, Statistical Inference and Financial Applications. New York: Wiley. https://doi.org/10.1002/9781119313472.

Giovanis, E. 2014. "The Turn-of-the-Month Effect: Evidence from Periodic Generalized Autoregressive Conditional Heteroskedasticity (PGARCH) Model." International Journal of Economic Sciences and Applied Research 7 (3): 43–61. https://doi.org/10.2139/ssrn.2479295.

Hipel, K., and A. I. McLeod. 1994. Time Series Modelling of Water Resources and Environmental Systems. Amsterdam: Elsevier.

Jones, R. H., and W. M. Brelsford. 1967. "Time Series with Periodic Structure." Biometrika 54 (3–4): 403–8. https://doi.org/10.1093/biomet/54.3-4.403.

Katayama, N. 2012. "Chi-Squared Portmanteau Tests for Structural VARMA Models with Uncorrelated Errors." Journal of Time Series Analysis 33 (6): 863–72. https://doi.org/10.1111/j.1467-9892.2012.00799.x.

Lund, R., and I. V. Basawa. 2000. "Recursive Prediction and Likelihood Evaluation for Periodic ARMA Models." Journal of Time Series Analysis 21 (1): 75–93. https://doi.org/10.1111/1467-9892.00174.

Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. Berlin: Springer-Verlag. https://doi.org/10.1007/978-3-540-27752-1.

Morgan, J., and J. Tatar. 1972. "Calculation of the Residual Sum of Squares for All Possible Regressions." Technometrics 14 (2): 317–25. https://doi.org/10.1080/00401706.1972.10488918.

Newey, W. K., and K. D. West. 1987. "A Simple, Positive Semidefinite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix." Econometrica 55 (3): 703–8. https://doi.org/10.2307/1913610.

Noakes, D. J., A. I. McLeod, and K. W. Hipel. 1985. "Forecasting Monthly Riverflow Time Series." International Journal of Forecasting 1 (2): 179–90. https://doi.org/10.1016/0169-2070(85)90022-6.

Pagano, M. 1978. "On Periodic and Multiple Autoregressions." Annals of Statistics 6 (6): 1310–7. https://doi.org/10.1214/aos/1176344376.

Reinsel, G. C. 1997. Elements of Multivariate Time Series Analysis, 2nd ed. Springer Series in Statistics. New York: Springer-Verlag. https://doi.org/10.1007/978-1-4612-0679-8.

Salas, J. D. 1980. Applied Modeling of Hydrologic Time Series. Water Resources Publications. https://doi.org/10.1016/0309-1708(80)90028-7.

Salas, J. D., D. C. Boes, and R. A. Smith. 1982. "Estimation of ARMA Models with Seasonal Parameters." Water Resources Research 18 (4): 1006–10. https://doi.org/10.1029/wr018i004p01006.

Thompstone, R. M., K. W. Hipel, and A. I. McLeod. 1985. "Grouping of Periodic Autoregressive Models." Time Series Analysis: Theory and Practice 6: 35–49.

Ursu, E., and P. Duchesne. 2009. "Estimation and Model Adequacy Checking for Multivariate Seasonal Autoregressive Time Series Models with Periodically Varying Parameters." Statistica Neerlandica 63 (2): 183–212. https://doi.org/10.1111/j.1467-9574.2009.00417.x.

Ursu, E., and J.-C. Pereau. 2016. "Application of Periodic Autoregressive Process to the Modeling of the Garonne River Flows." Stochastic Environmental Research and Risk Assessment 30 (7): 1785–95. https://doi.org/10.1007/s00477-015-1193-3.

Vecchia, A. 1985a. "Maximum Likelihood Estimation for Periodic Autoregressive Moving Average Models." Technometrics 27 (4): 375–84. https://doi.org/10.1080/00401706.1985.10488076.

Vecchia, A. 1985b. "Periodic Autoregressive-Moving Average (PARMA) Modeling with Applications to Water Resources." JAWRA Journal of the American Water Resources Association 21 (5): 721–30. https://doi.org/10.1111/j.1752-1688.1985.tb00167.x.

Vecchia, A. V. 1985c. "Periodic Autoregressive-Moving Average Modeling with Applications to Water Resources." Journal of the American Water Resources Association 21 (5): 721–30. https://doi.org/10.1111/j.1752-1688.1985.tb00167.x.

Received: 2021-03-28
Revised: 2021-09-07
Accepted: 2021-12-27
Published Online: 2022-01-31

© 2022 Walter de Gruyter GmbH, Berlin/Boston
