Abstract: We consider the local identification of parameters in structural VAR models with ARCH-type errors. By establishing a mapping between the structural and reduced-form models, we provide a set of sufficient conditions for the joint identification of all parameters. Under these conditions, as the structural parameters are identified, various restrictions on the parameters can be tested in a standard manner. For example, the significance test for the ARCH effect in the usual GARCH formulation for a structural shock does not suffer from the complications caused by a lack of identification encountered in univariate GARCH models.
5 Appendix A: proofs
We first introduce some notation. Let be size- identity matrix, be the ith column of and be the sub-matrix of with the ith column deleted. For a size- symmetric matrix , let be the vector of the lower triangular elements (column wise) of . There is a matrix such that
Further, let be the invertible matrix such that for any matrix . In fact, satisfies , , and for any matrices and . When is invertible, is invertible and the inverse is . The properties of and can be found in Abadir and Magnus (2005, 299–317). We also define
Note that gives the columns of , while removes the columns of , where is of size for any . Notice also , and . Let denote the infinitesimal difference operator for an infinitesimal partial change in . The following vectorisation equalities are useful, where a, b and c are any multiplication-conformable matrices.
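The duplication-matrix relation and the vectorisation equality invoked above can be checked numerically. The following is a minimal numpy sketch, not from the paper; the helper names `vech` and `duplication_matrix` are ours, and the identities verified are the standard ones, vec(A) = D_n vech(A) for symmetric A and vec(aBc) = (c′ ⊗ a) vec(B).

```python
import numpy as np

def vech(A):
    """Stack the lower-triangular elements of A column-wise."""
    n = A.shape[0]
    return np.concatenate([A[i:, i] for i in range(n)])

def duplication_matrix(n):
    """Build D_n such that vec(A) = D_n @ vech(A) for symmetric A."""
    m = n * (n + 1) // 2
    D = np.zeros((n * n, m))
    k = 0
    for j in range(n):
        for i in range(j, n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            E[j, i] = 1.0            # symmetric unit matrix
            D[:, k] = E.flatten(order="F")  # column-major vec
            k += 1
    return D

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
A = S + S.T                          # a symmetric matrix
D = duplication_matrix(3)
assert np.allclose(D @ vech(A), A.flatten(order="F"))

# vec(aBc) = (c' kron a) vec(B) for multiplication-conformable a, B, c
a = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
c = rng.standard_normal((4, 2))
lhs = (a @ B @ c).flatten(order="F")
rhs = np.kron(c.T, a) @ B.flatten(order="F")
assert np.allclose(lhs, rhs)
```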
Proof of Proposition 1. We use the same argument as Proposition 2.1 of Engle and Kroner (1995). First, any change in leads to a change in the conditional variance process . Because is positive definite with being its unique Cholesky factor, any change in alters too. Second, any change in causes changes in for some i (with probability one, as is a vector of continuous random variables) that in turn lead to a change in . The same argument shows that any change in also alters . Finally, as and are different stochastic processes, the effects of any changes in , and cannot offset each other. Hence, observationally equivalent processes cannot have different parameter values. ■
5.1 Derivation of sub-Jacobians
Denote . When is positive definite, is the lower triangular Cholesky factor of . The mapping from to is unique, with an invertible Jacobian.4 For with a partial change in , we have
The partials can be expressed as
For with a partial change in , we find
which, together with , implies
where is the ith row of . For the mapping , as is sparse, the relationships and are useful. Clearly, leads to
To see the partial of w.r.t. we write , where are sparse matrices with the element being and zeros elsewhere. It follows that
These imply
where
when is block diagonal. For , using
we find
and
In summary, the Jacobian can be written as
In what follows, we also refer to the right-most factor as the Jacobian when no confusion can arise. ■
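The claim used above, that the mapping from a positive definite matrix to its lower triangular Cholesky factor has an invertible Jacobian, can be illustrated by finite differences. This is a minimal numerical sketch, not the paper's derivation; it assumes numpy, a well-conditioned 3×3 example matrix, and our own helper names.

```python
import numpy as np

def vech(A):
    """Stack the lower-triangular elements of A column-wise."""
    n = A.shape[0]
    return np.concatenate([A[i:, i] for i in range(n)])

def unvech_sym(v, n):
    """Rebuild a symmetric matrix from its vech vector."""
    A = np.zeros((n, n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            A[i, j] = A[j, i] = v[k]
            k += 1
    return A

def chol_jacobian(Sigma, eps=1e-6):
    """Central-difference Jacobian of vech(Sigma) -> vech(chol(Sigma))."""
    n = Sigma.shape[0]
    m = n * (n + 1) // 2
    v0 = vech(Sigma)
    J = np.zeros((m, m))
    for k in range(m):
        vp = v0.copy(); vp[k] += eps
        vm = v0.copy(); vm[k] -= eps
        Cp = np.linalg.cholesky(unvech_sym(vp, n))
        Cm = np.linalg.cholesky(unvech_sym(vm, n))
        J[:, k] = (vech(Cp) - vech(Cm)) / (2 * eps)
    return J

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
Sigma = S @ S.T + 3 * np.eye(3)      # positive definite example
J = chol_jacobian(Sigma)
assert np.linalg.matrix_rank(J) == 6  # full rank: locally invertible map
```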
In , let
be the columns associated with for , i.e., associated with the non-zero columns of in (see Proof of Lemma 1 below).
Proof of Proposition 2. We only need to show that the first two equations in (6) are uniquely solved by under the stated conditions. Then, follows because is of full column rank. Hence, we focus on the case without the GARCH effect, i.e., the case described by the first two equations of (6). We show that the Jacobian is of full column rank under the stated conditions. By Lemma 1, and have full column ranks and are independent of each other. Lemma 1 also implies that the columns of are independent of the columns of with non-zero columns in . When all are non-zero, Lemma 1 indicates that has full rank and hence the Jacobian has full column rank. When one of is zero, say , precisely columns of become zero; these are the columns associated with the columns of in (8). By Lemma 2, the columns in that are associated with zero columns of are independent of those in and of each other. The columns in that are associated with non-zero columns of are also independent of those in . Therefore, the Jacobian has full column rank. ■
Lemma 1. The column ranks of , , and are , , and , respectively, where () is the number of non-zero elements in . The columns of are independent of the non-zero columns of .
Proof of Lemma 1. When examining the column ranks, we may focus on the right-most factor of (7) and refer to it as . The rank of is because . We note that
where is the ith row of and is a vector of zeros. The sub-Jacobian is of size with all zeros at the rows. The rank of is because each column and each row has no more than one non-zero element, and for each non-zero there are precisely non-zero columns. As is invertible, the rank of is . The diagonals (ones) of are at the rows of , which are associated with the zero rows of the non-zero columns of ; this implies that the columns of cannot be spanned by the columns of , leading to the last statement of Lemma 1. ■
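The rank count in this proof rests on the fact that a matrix in which every row and every column contains at most one non-zero element has rank equal to its number of non-zero columns. A small illustration (the particular matrix and its dimensions are ours, chosen only to exhibit this sparsity pattern):

```python
import numpy as np

# A 7x5 matrix with at most one non-zero entry per row and per column,
# mimicking the sparsity pattern counted in the proof of Lemma 1.
M = np.zeros((7, 5))
M[0, 1] = 0.7
M[2, 3] = -1.3
M[5, 4] = 2.1

# Its rank equals the number of non-zero columns.
nonzero_cols = int(np.count_nonzero(np.any(M != 0, axis=0)))
assert np.linalg.matrix_rank(M) == nonzero_cols == 3
```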
Lemma 2. The columns of are linearly independent of each other and independent of those of .
Proof of Lemma 2. Note that
is a re-allocation of the elements of (a -vector) into a -vector, where for and is a size- vector of zeros. When , the vector starts with the block , which is followed by zeros. When , there are no trailing zeros in the vector. Note also that . For example, when with ,
The columns of in (8) are linearly independent of each other because they are simply re-allocations of the columns of in -vectors. The columns of are only related to the ith column of , as the non-zero elements of the other columns of are associated with the zero rows of . Without loss of generality, we consider the case with , for which the first column of is . The first rows of form an invertible matrix because the determinant of the matrix equals the (1,1) cofactor of , and the cofactor is non-zero owing to the fact that and are invertible and the diagonals of are ones. This implies that the columns of are independent of each other and . Further, the non-zero elements of the columns of correspond to the zero rows of , implying that the columns of are independent of the columns of . Hence, the claim is true for . For an arbitrary , the argument is the same but involves more notation. ■
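The re-allocation step in this proof relies on the fact that copying the columns of a full-column-rank block into fixed row positions of a taller, zero-padded matrix preserves their linear independence. A minimal numerical sketch, assuming numpy; the dimensions and row positions are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# A block with full column rank...
B = rng.standard_normal((4, 3))
assert np.linalg.matrix_rank(B) == 3

# ...re-allocated into a taller matrix: each column's entries are copied
# into fixed row positions, with zeros elsewhere (as in Lemma 2's embedding).
rows = [1, 3, 4, 6]                    # target rows inside an 8-row vector
P = np.zeros((8, 3))
P[rows, :] = B
assert np.linalg.matrix_rank(P) == 3   # independence is preserved
```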
6 Appendix B: an example
As an example, for the case of , the Jacobian and other matrices of interest are given explicitly as follows.
as , and
where are the elements in of . When one of is zero, the Jacobian is clearly of full column rank under the conditions of Proposition 2.
References
Abadir, K. M., and J. R. Magnus. 2005. Matrix Algebra (Econometric Exercises, Vol. 1). New York: Cambridge University Press. doi:10.1017/CBO9780511810800.
Bernanke, B. 1986. “Alternative Explorations of the Money–Income Correlation.” Carnegie-Rochester Conference Series on Public Policy 25: 49–99. doi:10.1016/0167-2231(86)90037-0.
Blanchard, O. J., and P. Diamond. 1989. “The Beveridge Curve.” Brookings Papers on Economic Activity 20: 1–76. doi:10.2307/2534495.
Blanchard, O. J., and D. Quah. 1989. “The Dynamic Effects of Aggregate Demand and Supply Disturbances.” American Economic Review 79: 655–73.
Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. “ARCH Models.” In Handbook of Econometrics, Vol. IV, edited by R. F. Engle and D. L. McFadden, 2959–3038. Amsterdam: Elsevier Science. doi:10.1016/S1573-4412(05)80018-2.
Caporale, G. M., A. Cipollini, and P. O. Demetriades. 2005. “Monetary Policy and the Exchange Rate During the Asian Crisis: Identification through Heteroscedasticity.” Journal of International Money and Finance 24: 39–53. doi:10.1016/j.jimonfin.2004.10.005.
Dungey, M., G. Milunovich, and S. Thorp. 2010. “Unobservable Shocks as Carriers of Contagion: A Dynamic Analysis Using Identified Structural GARCH.” Journal of Banking and Finance 34: 1008–21. doi:10.1016/j.jbankfin.2009.11.006.
Engle, R. F., and K. F. Kroner. 1995. “Multivariate Simultaneous Generalized ARCH.” Econometric Theory 11: 122–50. doi:10.1017/S0266466600009063.
Klein, R., and F. Vella. 2010. “Estimating a Class of Triangular Simultaneous Equations Models without Exclusion Restrictions.” Journal of Econometrics 154: 154–64. doi:10.1016/j.jeconom.2009.05.005.
Lanne, M., H. Lütkepohl, and K. Maciejowska. 2010. “Structural Vector Autoregressions with Markov Switching.” Journal of Economic Dynamics and Control 34: 121–31. doi:10.1016/j.jedc.2009.08.002.
Lewbel, A. 2010. “Using Heteroskedasticity to Identify and Estimate Mismeasured and Endogenous Regressor Models.” Boston College Working Papers in Economics 587, revised 15 Dec 2010.
Normandin, M., and L. Phaneuf. 2004. “Monetary Policy Shocks: Testing Identification Conditions Under Time-Varying Conditional Volatility.” Journal of Monetary Economics 51: 1217–43. doi:10.1016/S0304-3932(04)00069-8.
Prono, T. 2008. “GARCH-Based Identification and Estimation of Triangular Systems.” Federal Reserve Bank of Boston Working Paper QAU 08–4.
Rigobon, R. 2003. “Identification through Heteroskedasticity.” The Review of Economics and Statistics 85: 777–92. doi:10.1162/003465303772815727.
Rigobon, R., and B. Sack. 2003. “Spillovers across U.S. Financial Markets.” Finance and Economics Discussion Series 2003–13, Board of Governors of the Federal Reserve System (U.S.). doi:10.3386/w9640.
Rothenberg, T. J. 1971. “Identification in Parametric Models.” Econometrica 39: 577–91. doi:10.2307/1913267.
Sentana, E., and G. Fiorentini. 2001. “Identification, Estimation and Testing of Conditionally Heteroskedastic Factor Models.” Journal of Econometrics 102: 143–64. doi:10.1016/S0304-4076(01)00051-3.
Sims, C. A. 1980. “Macroeconomics and Reality.” Econometrica 48: 1–48. doi:10.2307/1912017.
Wright, P. G. 1928. The Tariff on Animal and Vegetable Oils. New York: Macmillan.
- 1
Identification via constraints placed on variances was first considered by Wright (1928).
- 2
- 3
Sentana and Fiorentini (2001) use a two-step estimation approach. Based on the unconditional variance, their first step provides an estimator of the matrix, which is used as input for the second step that produces the estimators of the conditional variance parameters. They acknowledge that, when the factor dimension is greater than or equal to two, “…the two-step estimator of (parameters in the conditional variance) will be inconsistent” (see paragraph 2, p. 150).
- 4
Although is lower triangular (not symmetric), we still use vech() to denote its lower triangular elements when no confusion can arise.
©2013 by Walter de Gruyter Berlin / Boston