Taking a DSGE Model to the Data Meaningfully

All economists say that they want to take their models to the data. But with incomplete and highly imperfect data, doing so is difficult and requires carefully matching the assumptions of the model with the statistical properties of the data. The cointegrated VAR (CVAR) offers a way of doing so. In this paper we outline a method for translating the assumptions underlying a DSGE model into a set of testable assumptions on a cointegrated VAR model and illustrate the ideas with the RBC model in Ireland (2004). Accounting for unit roots (near unit roots) in the model is shown to provide a powerful robustification of the statistical and economic inference about persistent and less persistent movements in the data. We propose that all basic assumptions underlying the theory model should be formulated as a set of testable hypotheses on the long-run structure of a CVAR model, a so-called "theory-consistent hypothetical scenario". The advantage of such a scenario is that it forces us to formulate all testable implications of the basic hypotheses underlying a theory model. We demonstrate that most assumptions underlying the DSGE model and, hence, the RBC model are rejected when properly tested. Leaving the RBC model aside, we then report a structured CVAR analysis that summarizes the main features of the data in terms of long-run relations and common stochastic trends. We argue that structuring the data in this way offers a number of "sophisticated" stylized facts that a theory model should replicate in order to claim empirical relevance.


Introduction
The aim of this paper is to demonstrate that the cointegrated VAR model, when correctly specified, can be used as a general framework for assessing the empirical relevance of most of the (explicitly or implicitly stated) basic assumptions behind an economic theory model. The idea is to test as many as possible of these assumptions prior to forcing them onto a theory-restricted empirical model. Thus, we use the statistical model to find out, prior to the specification of the economic model, which assumptions are tenable given the economic reality. The advantage is that it allows us to modify the untenable parts of the theory model (or choose another model altogether) so as to bring the model closer to the economic reality. This is contrary to an approach where the data from the outset are squeezed into the straitjacket of a theoretical model with its numerous untested assumptions, with the risk that signals in the data suggesting a different set of economic mechanisms will be overlooked. The (log-linearized) real business cycle (RBC) model by Peter Ireland (2004) (hereafter PI) provides an illustration. Even though the empirical results of this paper suggest that the RBC assumption (that technology shocks are the primary source of business cycles) is strongly rejected when the data are allowed to speak freely, this assumption works reasonably well in PI. Thus, it might be empirically hard to reject an incorrect assumption in a model that is designed to replicate such an assumption.
The idea in PI for taking the model to the data is to allow for a first-order AR residual process in the theoretical DSGE model, to rewrite it in state-space form, and to estimate it by maximum likelihood. The model is estimated under the assumption that all variables are trend-stationary, but PI reports a root of 0.9987 which, in practice, is indistinguishable from a unit root. The consequence of this was discussed in Johansen (2006), who showed that standard asymptotic distributions provide very poor approximations to the finite sample distributions of the estimated steady-state values. This is because the convergence of the finite sample distribution to the Gaussian distribution is extremely slow when the model contains a near unit root. Thus, the cost of treating a near unit root as stationary is that standard inference may be completely unreliable unless we have a very long time series. The inferential problems related to the near unit root were demonstrated for a constant-parameter model with independent Gaussian errors, which was assumed to correctly describe the data (the US economy in the last four decades). If this assumption is not correct, then standard inference will be even more hazardous. This is true, in particular, when the errors are not independent, as all asymptotic t, χ², and F tests are only valid for independent normal errors.
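The practical consequence of a root of 0.9987 at this sample length can be illustrated with a small Monte Carlo exercise (a generic numpy sketch, not PI's or Johansen's code; the sample length T = 168 quarters and the number of replications are illustrative choices): the least-squares estimate of a near-unit autoregressive root is systematically biased downwards in samples of this size, so inference that treats the process as comfortably stationary is fragile.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_ar1(x):
    """OLS estimate of rho in x_t = rho * x_{t-1} + e_t (no intercept)."""
    y, z = x[1:], x[:-1]
    return float(z @ y / (z @ z))

def mc_ar1(rho, T=168, reps=2000):
    """Distribution of the OLS AR(1) estimate over `reps` simulated samples."""
    ests = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(T)
        x = np.empty(T)
        x[0] = e[0]
        for t in range(1, T):
            x[t] = rho * x[t - 1] + e[t]
        ests[r] = ols_ar1(x)
    return ests

est = mc_ar1(0.9987)   # root of the size reported in Ireland (2004)
# the mean estimate falls visibly short of the true value: finite-sample bias
print("mean rho-hat:", est.mean(), "sd:", est.std())
```

The downward bias and skewed sampling distribution are exactly why the Gaussian asymptotics referred to above are a poor guide in samples of a few hundred quarters.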
Therefore, to avoid unreliable statistical inference, it seems important to assess as many as possible of the explicit and implicit assumptions underlying a theoretical model prior to the final estimation of the full model. But to be able to test the basic assumptions without forcing the data onto the chosen theory model, we need an empirical framework which is general enough to encompass the major features of the theory model as well as of other competing models. Since the VAR model, when correctly specified, essentially represents the information in the data, it is likely to be a good candidate for such a framework. But, because it is linear in the parameters, it requires that the often highly nonlinear theory model can be adequately approximated by a log-linearized version.
To avoid the risk of fragile inference discussed in Johansen (2006), we propose that near unit roots are approximated with unit roots, so that the baseline model becomes the cointegrated VAR (CVAR). This allows us to distinguish between (1) long and persistent movements away from the long-run linear growth path, approximated by the estimated stochastic trends (the long business cycles), and (2) shorter, less persistent deviations from steady states, approximated by the estimated cointegration relations (the short cycles). Within this framework, we suggest that all basic assumptions underlying a DSGE model are formulated as a set of testable hypotheses on the cointegration and common trends properties, a so-called 'hypothetical scenario'. The advantage of such a scenario is that it forces us to formulate all testable implications of the basic hypotheses underlying a theory model. This is contrary to the practice of focusing on single hypotheses, which only make sense in isolation but not in the full context of the model. Thus, the scenario can be seen as a safeguard against testing internally inconsistent hypotheses.
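The decomposition the CVAR exploits, persistent movements driven by cumulated shocks versus stationary deviations from steady state, can be sketched in a few lines of simulation (all parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400

# one common stochastic trend: cumulated shocks (the "long cycle" component)
trend = np.cumsum(rng.standard_normal(T))

# two series that share the trend, each with its own stationary noise
y = trend + rng.standard_normal(T)   # I(1): wanders with the cumulated shocks
c = trend + rng.standard_normal(T)   # loads on the same stochastic trend

# the cointegration relation y - c removes the common trend and is stationary,
# while each level series is dominated by the trend
print("sd of y     :", y.std())
print("sd of y - c :", (y - c).std())
```

The persistent component dominates the levels, while the equilibrium error y − c stays bounded; this is the distinction between (1) and (2) above.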
The organization of the paper is as follows: Section 2.1 presents the basic features of the RBC model and Section 2.2 presents the DSGE method suggested in PI in order to take the model to the data. Finally, Section 2.3 takes a closer look at some of the untested assumptions of the DSGE model and finds that they are generally untenable given the empirical information in the data. Section 3 lists all basic assumptions underlying the DSGE model in PI and demonstrates that they can be formulated as testable restrictions on the common trends representation of a VAR model. Section 4 specifies an empirical VAR model that is carefully checked for misspecification. Strong evidence of parameter non-constancy necessitates a split of the sample period around 1979. Since the estimated sub-period VAR models passed the misspecification tests, they were considered an adequate description of the data. Section 5 discusses the numerous hypotheses derived from the DSGE model in PI, demonstrates how to test them, and finds that the empirical content of the RBC model is generally very weak. In the last part of the paper, we depart from the RBC model and, in a more explorative analysis, exploit the cointegration and common trends information in the data. Section 6.1 reports a data-consistent long-run structure of two identified cointegration relations and their adjustment dynamics, Section 6.2 the estimated common stochastic trends and their loadings in the data, and Section 6.3 discusses whether the data had anything useful to say. Section 7 concludes with a discussion.
The DSGE model in Ireland (2004)

The assumption that the aggregate technology shock alone drives all business cycle fluctuations is a key feature of real business cycle models. From a theoretical point of view this may be a useful assumption, as it serves to isolate the effects of technological innovations in a stylized economy. However, the 'one shock' assumption makes the model stochastically singular, implying that certain linear combinations of the endogenous variables evolve in a deterministic fashion. This is obviously a problem if we want to use the model to analyze real data: any attempt to estimate a stochastically singular model will lead to poor results.
Therefore, when taking this model to the data the literature has proceeded in two directions: some authors (Bencivenga, 1992, Ingram et al., 1994, DeJong et al., 2000, Kim, 2000) introduce additional structural innovations until the number of shocks equals the number of endogenous variables; others (Altug, 1989, McGrattan, 1994, Hall, 1996, McGrattan et al., 1997) augment the theoretical equations with a serially correlated residual that is assumed to account for measurement errors as well as the variation in the data not captured by the 'one shock' assumption. The method proposed by PI for transforming the RBC model into a DSGE model in order to take it to the data follows the second line of reasoning.

The basic RBC structure
The economy is described by the real business cycle model in Hansen (1985), where a representative agent maximizes expected utility by choosing between consumption, C_t, and total hours worked, H_t, subject to a constant returns to scale technology described by the Cobb-Douglas production function:

Y_t = A_t K_t^θ (η^t H_t)^(1−θ),

where Y_t is gross output, K_t is the capital stock, η > 1 measures the rate of labor-augmented technological progress and A_t is total factor productivity. The following two identities complete the model:

K_t = (1 − δ)K_{t−1} + I_t,

where capital is defined as capital last period, K_{t−1}, corrected for the depreciation rate, plus investment I_t at time t, and:

Y_t = C_t + I_t,

where gross output is the sum of consumption and investment. The first-order conditions for the model equate the marginal rate of substitution between consumption and leisure to the marginal product of labor, and the marginal utility of consumption today to its discounted expected return tomorrow (the consumption Euler equation).

The proposed method
The model described by (1)-(6) is highly nonlinear. To be able to take it to the data, PI log-linearizes the theoretical model around its theoretical steady-state value. Taking the log of (2) leads to:

y_t = a_t + θ k_t + (1 − θ)(b_1 t + h_t),

where b_1 = ln η and lower cases denote logarithmic transformations. Total factor productivity, a_t, is assumed to follow a first-order autoregressive model:

a_t = (1 − ρ)a + ρ a_{t−1} + ε_t,

with |ρ| < 1 and ε_t ~ NI(0, σ²_ε). The theoretical model assumes that output, consumption, investment and capital share the same growth rate given by the labor-augmenting technological progress η. It then follows that the trend-adjusted variables and a_t are stationary around their steady-state values y, c, i, k, h, and a.
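Note that the production function itself is exactly log-linear, so taking logs of (2) involves no approximation; only the first-order conditions need linearization. A quick numerical check (all parameter values and paths below are hypothetical):

```python
import numpy as np

theta, eta = 0.35, 1.005           # hypothetical capital share, gross growth rate
t = np.arange(40, dtype=float)
A = np.exp(0.01 * np.sin(t))       # arbitrary positive TFP path
K = np.exp(0.02 * t + 1.0)         # arbitrary positive capital path
H = np.full_like(t, 0.3)           # hours worked

# Cobb-Douglas in levels
Y = A * K**theta * (eta**t * H)**(1 - theta)

# log output equals the linear combination of the log-linearized model:
# y_t = a_t + theta*k_t + (1 - theta)*(t*ln(eta) + h_t), with b_1 = ln(eta)
y_lin = np.log(A) + theta * np.log(K) + (1 - theta) * (t * np.log(eta) + np.log(H))
print(np.allclose(np.log(Y), y_lin))
```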
Log-linearizing (5) and (6), one can rewrite the linearized system in matrix form as:

s_t = A s_{t−1} + B ε_t,
f_t = C s_t,

where s_t = [k̂_t, â_t]′ and f_t = [ŷ_t, ĉ_t, ĥ_t]′ contain the log deviations of the de-trended variables from their steady-state values, and A, B and C are functions of the parameters of the model. The model is stochastically singular because there is an exact linear relation among the variables in f_t, such that d′C = 0, implying d′f_t = 0, where d is a 3 × 1 vector. This, of course, is just the consequence of assuming that ε_t is the only source of randomness in the model.
As an empirical description of the data this is clearly too restrictive, and PI's method consists of augmenting each equation in (12) with a serially correlated error term, so that the model to be taken to the data becomes:

f_t = C s_t + u_t,
u_t = D u_{t−1} + ξ_t,

where ξ_t is assumed to be NI(0, V) and uncorrelated with ε_t at any lag.
In PI the structural parameters are constrained to satisfy the restrictions implied by theory and are calibrated and fixed to the values suggested by Hansen (1985); the eigenvalues of A (ρ and a_1) and the eigenvalues of D (d_1, d_2 and d_3) are constrained to be less than one in modulus, and the covariance matrix V is constrained to be positive definite. Maximum likelihood estimates of model (14)-(16) are then calculated by using the Kalman filter and a nonlinear optimization routine.
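The likelihood evaluation underlying this procedure is the standard prediction-error decomposition computed by the Kalman filter. A minimal generic sketch (not PI's code; the univariate system and all parameter values below are illustrative):

```python
import numpy as np

def kalman_loglik(y, A, C, Q, R, s0, P0):
    """Gaussian log-likelihood of y_t = C s_t + v_t, s_t = A s_{t-1} + w_t,
    with Var(w) = Q and Var(v) = R, via the prediction-error decomposition."""
    s, P, ll = s0, P0, 0.0
    for yt in y:
        s = A @ s                        # state prediction
        P = A @ P @ A.T + Q
        e = yt - C @ s                   # innovation (prediction error)
        S = C @ P @ C.T + R
        ll -= 0.5 * (len(yt) * np.log(2 * np.pi)
                     + np.log(np.linalg.det(S)) + e @ np.linalg.solve(S, e))
        K = P @ C.T @ np.linalg.inv(S)   # Kalman gain and measurement update
        s, P = s + K @ e, P - K @ C @ P
    return ll

# simulate a stationary AR(1) state observed with noise, then compare the
# likelihood at the true measurement variance with a grossly wrong one
rng = np.random.default_rng(2)
T, a = 300, 0.95
s = np.zeros(T)
for t in range(1, T):
    s[t] = a * s[t - 1] + 0.3 * rng.standard_normal()
y = (s + 0.3 * rng.standard_normal(T)).reshape(-1, 1)

A, C = np.array([[a]]), np.array([[1.0]])
Q = np.array([[0.09]])
ll_true = kalman_loglik(y, A, C, Q, np.array([[0.09]]), np.zeros(1), np.eye(1))
ll_bad = kalman_loglik(y, A, C, Q, np.array([[9.0]]), np.zeros(1), np.eye(1))
print(ll_true > ll_bad)   # the true parameters attain the higher likelihood
```

A nonlinear optimizer run over such a likelihood, subject to the stability and positivity constraints described above, is what produces PI's estimates.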

Are the assumptions empirically defensible?
The reported estimates are claimed to be maximum likelihood estimates. These estimates, however, are only relevant given that the assumed model is a correct representation of the data. There are many explicit (and implicit) assumptions underlying PI's model. Some of them, the structural and the exogeneity assumptions, can be classified as predominantly economic, whereas others, the stationarity and the distributional assumptions, are more statistical. The following list summarizes:

1. Structural assumptions: A, B, C, D, σ_ε, and V are constant over time.
2. Exogeneity assumptions: a_t and k_t are driving the system.

3. Stationarity assumptions:

(a) y_t, c_t, k_t are trend-stationary with identical linear growth rates derived from labor-augmented technological progress;

(b) a_t and h_t are stationary;

(c) u_t = Σ_{k=0}^{∞} D^k ξ_{t−k} with the eigenvalues of D less than one in modulus, i.e. u_t is a zero-mean stationary AR(1) process.

4. Distributional assumptions: ε_t ~ NI(0, σ²_ε) and ξ_t ~ NI(0, V), mutually uncorrelated at all lags.

The assumption of structural parameters implies that they ought to remain constant across periods, for example when monetary and fiscal policy regimes change, as happened around 1979. Parameter constancy over the two regimes was rejected but, as the parameter estimates were quite similar over the two periods, PI disregarded this evidence. Thus, whether the parameters are structural in the sense of describing the RBC model seems questionable already at this stage. The stationarity assumptions needed for the log-linearization around constant steady states can be assessed based on the estimates of A and D in Table 1. As already discussed, the largest root, 0.9987, is in practice indistinguishable from a unit root. But roots in D of the size 0.94 and 0.88 also suggest additional pronounced persistence in the data. Figure 1 illustrates the persistent deviations from the assumed 'constant' steady-state values. It also illustrates that the deviations are either systematically positive or negative. As will be shown in Section 6.2, this is likely to be the consequence of assuming identical deterministic growth rates for output, consumption and capital when in fact they differ quite significantly. Table 1 also reports the tests of the null of residual normality, no autocorrelation, and no ARCH. The results show that no autocorrelation is rejected for all residuals but ξ̂_{y,t}. Furthermore, the cross-correlogram shows significant cross-correlations between ε̂_t and each of the ξ̂_s for s > t. No ARCH is rejected for all the error terms except ε̂_t, and normality is rejected for all residuals. Thus, the distributional assumptions under 4 do not seem to hold in the data and the model proposed in PI is not correctly specified. Therefore, the statistical inference cannot be considered reliable, and is possibly even misleading.
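Diagnostics of the kind reported in Table 1 can be sketched with simplified versions of the Jarque-Bera and ARCH LM statistics (generic illustrations, not the exact tests used in the paper):

```python
import numpy as np

def jarque_bera(e):
    """Jarque-Bera statistic: large values signal non-normal skew/kurtosis."""
    e = (e - e.mean()) / e.std()
    n = len(e)
    skew, kurt = (e**3).mean(), (e**4).mean()
    return n / 6 * (skew**2 + (kurt - 3) ** 2 / 4)

def arch_lm(e, lags=4):
    """ARCH LM statistic T*R^2 from regressing e_t^2 on its own lags."""
    e2 = e**2
    Y = e2[lags:]
    X = np.column_stack([np.ones(len(Y))]
                        + [e2[lags - i - 1:-i - 1] for i in range(lags)])
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ b
    r2 = 1 - resid.var() / Y.var()
    return len(Y) * r2

rng = np.random.default_rng(3)
T = 500
iid = rng.standard_normal(T)          # well-behaved residuals

# an ARCH-type series: volatility depends on the previous squared shock
arch = np.zeros(T)
for t in range(1, T):
    arch[t] = rng.standard_normal() * np.sqrt(1 + 0.9 * arch[t - 1] ** 2)

skewed = rng.exponential(size=T)      # clearly non-normal residuals
print("JB  :", jarque_bera(iid), jarque_bera(skewed))
print("ARCH:", arch_lm(iid), arch_lm(arch))
```

Residuals that fail such tests, as PI's do, invalidate the Gaussian likelihood on which the reported standard errors rest.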

The business cycle model and the cointegrated VAR
To check whether the conclusions are robust to the misspecification detected in the RBC model we need a model which is sufficiently flexible to encompass the RBC model as well as other alternative models. Because the VAR model, if correctly specified, is a convenient representation of the information in the data (Hendry and Mizon, 1993), it is a natural choice for the purpose at hand. As a point of departure we shall, therefore, start from an unrestricted VAR model in levels, test for misspecification, and revise the VAR model accordingly. The next step is to formulate as many as possible of the explicit or implicit assumptions underlying the RBC model in PI as testable hypotheses within the VAR. As discussed in Juselius (2006), Chapter 2, the MA representation of the VAR model is useful in this respect. Though not all aspects of the PI model can be addressed in the linear VAR framework, many of the testable hypotheses correspond to basic conditions which are necessary for the empirical validity of the model. Thus, the assessment of the theory model would ideally proceed in two stages: first, the basic (necessary) conditions are tested and, if rejected, would imply a modification of (or possibly a rejection of) the theoretical model; if not rejected, they would imply further testing of the remaining nonlinear conditions. Before illustrating how to translate the basic assumptions underlying the PI model into testable hypotheses within the VAR model we need to address one complication. PI argues that capital, k_t, is unobservable and, based on Kalman filtering of the RBC model, generates a series for capital assuming that 1 − δ = 0.975 in (3). According to (13), shocks to capital and total factor productivity are identical and a_t and k_t are, therefore, deterministically related. Thus, including both of them in the VAR model leads to stochastic singularity.
Another problem with the choice of Ireland's simulated capital stock variable is that it essentially has been designed to conform with the assumed model. Thus, we shall deviate from PI by analyzing an observed capital stock formation series, rather than the simulated series. As the former is a flow rather than a stock variable we could have created K_t as in (3) using 1 − δ = 0.975. However, the corresponding variable would be very close to I(2) and would be excludable from the cointegration relations from the outset. This is because none of the other variables is even close to being I(2). Thus, capital stock formation seems the only possible choice in this context.
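The perpetual-inventory recursion in (3) that would turn the flow series into a stock is easy to state (generic code; the survival rate 0.975 is the value from PI, while the constant investment path is made up for illustration):

```python
import numpy as np

def perpetual_inventory(I, k0=0.0, survival=0.975):
    """K_t = survival * K_{t-1} + I_t: cumulate investment flows into a stock."""
    K = np.empty(len(I))
    prev = k0
    for t, it in enumerate(I):
        prev = survival * prev + it
        K[t] = prev
    return K

I = np.full(400, 1.0)          # constant investment flow (illustrative)
K = perpetual_inventory(I)
# with constant I the stock converges to I / (1 - survival) = 40
print(K[-1])
```

Because the stock is a weighted cumulation of the flow, any stochastic trend in investment is integrated once more, which is why the constructed stock would be very close to I(2) when the flow is already I(1).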
There are three different measures of capital stock formation to choose between in the OECD database Economic Outlook. Figure 2, upper panel, shows the graphs of Ireland's capital stock variable together with the capital stock of the business sector, private fixed capital formation and gross fixed capital formation. These observed variables are only available from 1960:1 onwards and the empirical checking will be based on a slightly shorter sample period than in PI. All series are per capita, in logs, and normalized by subtracting the first observation (1960:2) from the series to facilitate a graphical comparison. We note that the PI simulated per capita capital stock variable exhibits less growth over the sample period compared to the three observed series. Thus, imposing the RBC assumption of identical linear growth rates on the data generates a variable which is different from any of the measured ones. Among the latter, we found private capital formation to be the most adequate. The capital stock of the business sector was discarded because it was contaminated with several large unexplainable outliers which compromised the econometric interpretation of the results. There was no major econometric difference between the choice of gross or private capital formation, but we preferred the latter since it was closer to PI's variable.
The lower panel of Figure 2 shows the difference between private capital formation and the simulated capital stock variable in PI. It is interesting to notice the strong evidence of long business cycle behavior in the official series, whereas no such (or very little) behavior can be seen in the simulated series. The lower panel shows that the differential between the two series is moving in a highly persistent, non-stationary manner. To find out how close the correspondence is between the officially measured capital stock formation and the generated series together with the estimated TFP, we have regressed the former on the latter plus a linear trend. Tentatively, the results suggest that private capital stock formation is more closely related to the simulated TFP than to the simulated capital stock, and that the linear trend in the simulated capital differs from the trend in the official series. Altogether, this suggests that the empirical conclusions may not be robust to the choice of observed or simulated capital stock.
The RBC model defined by (7) and (8) is driven by a deterministic trend, proxying labor-augmented technological progress, and by random shocks to total factor productivity, a_t. The stochastic assumptions in (14), (15), and (16) make the model more flexible by allowing for additional AR(1) dynamics in the short-run changes of y_t, h_t, and c_t. Here we shall allow the observed k_t to have a similar specification as y_t, h_t and c_t. With this modification, the following MA representation corresponds to the DSGE model in PI, where v_t = D v_{t−1} + ξ_t and ξ_t′ = [ξ_1t, ξ_2t, ξ_3t, ξ_4t] is NI(0, V) and uncorrelated with ε_t. From (8) we note that:

a_t = a + ρ^{t−1}ε_1 + ρ^{t−2}ε_2 + ⋯ + ρ ε_{t−1} + ε_t + ρ^t a_0,

i.e. it corresponds to a stochastic unit root trend only if ρ = 1. Since one of the roots was very close to one (0.9987) we will initially assume that the VAR model contains at least one stochastic trend and, hence, at most three cointegration relations. Thus, treating a_t as a unit root process allows us to distinguish between relations which exhibit pronounced persistence and relations which do not.
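How little a root of 0.9987 differs from an exact unit root at this sample length can be seen by feeding the same shocks through both processes (a generic sketch with T = 168 quarters, roughly the 42-year quarterly sample):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 168
eps = rng.standard_normal(T)

def ar1_path(rho, eps):
    """AR(1) path driven by a given shock sequence."""
    x = np.empty(len(eps))
    x[0] = eps[0]
    for t in range(1, len(eps)):
        x[t] = rho * x[t - 1] + eps[t]
    return x

near = ar1_path(0.9987, eps)   # near unit root, as estimated in PI
unit = ar1_path(1.0, eps)      # exact stochastic unit root trend
# over 168 quarters the two paths are nearly indistinguishable
print(np.corrcoef(near, unit)[0, 1])
```

Since the discounting 0.9987^t barely bites within the sample, approximating the near unit root with an exact unit root is the statistically safer option.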
The log-linearized first-order condition (5) implies a relation among c_t, y_t and h_t. Provided that c_t and y_t are similarly affected by the TFP stochastic trend, implying a stationary savings ratio, i.e. (c_t − y_t) ~ I(0), we note that h_t has to be stationary for (c_t − y_t + h_t) to be stationary. Thus, d_11 = d_12 and d_13 = 0 in (18) is consistent with u_1,t ~ I(0). If expectations do not deviate systematically from actual realizations, then cointegration properties will not change when replacing expected with realized values. Since Δc_t ~ I(0), (6) implies that the (log of the) income-capital ratio has to be stationary for the equation to make sense when c ≠ 0. Thus, (y_t − k_t) ~ I(0) is consistent with d_11 = d_14.
Thus, the basic assumptions behind the RBC model can be formulated in terms of a theory-consistent scenario. It is now easy to see that (21) implies a non-stationary Cobb-Douglas function, {y_t − θ k_t − (1 − θ)h_t} ~ I(1), and the stationary relations (c_t − y_t) ~ I(0) and (y_t − k_t) ~ I(0). In this case, (c_t − y_t + h_t) ~ I(0) and (19) holds as a stationary condition. Thus, the stationarity of h_t is crucial for the theoretical consistency of the empirical model. If, instead, h_t = d_31 a_t + v_3,t and hence I(1), then the cointegration implications would be different: in this case (c_t − y_t + h_t) ~ I(1) and (19) would no longer hold as a stationary condition. The above implications of the PI model will be tested in Sections 5 and 6.

An adequately specified VAR model
Consistent with the AR(1) assumption in PI, the common trends representation (18) was specified for a VAR(1) model. However, the lag determination tests of Table 2 clearly show that the model needs one more lag to properly account for the dynamics in the data. This, of course, is not surprising, as the PI model was found to have autocorrelated residuals. Thus, the more general VAR(2) is our baseline model:

Δx_t = Γ₁Δx_{t−1} + Πx_{t−1} + μ₀ + μ₁t + ΦD_t + ε_t,  ε_t ~ N(0, Ω),  t = 1, …, T,

with x_{−1}, x₀ given, where x_t = [y_t, c_t, h_t, k_t]′, y_t is the log of per capita real US GDP, c_t is the log of per capita real US aggregate consumption, h_t is the log of per capita total hours worked in the US, k_t is the log of per capita real US private capital formation, and D_t′ = [D_s,t, D_p,t, D_tr,t] is a vector of dummy variables to be defined below. The data are for a total sample of 1960:1-2002:1, spanning 42 years of quarterly observations which (due to the lack of observations on k_t) is 12 years shorter than the period used by PI. The graphs of the data are given in the Appendix.
The trend component, μ₁t, needs to be restricted to the cointegration relations, μ₁ = αβ₁, to prevent quadratic trends in (24). It works as a proxy for 'labor-augmented technical progress' according to (2) and allows us to test many hypotheses involving trend-stationarity, such as the trend-stationarity of the Cobb-Douglas function.
The constant term, on the other hand, has to be unrestricted, i.e. μ₀ = γ₀ + αβ₀, allowing for a constant term in the cointegration relations, β₀, and a constant term in the equations describing the slope of the linear trends in the data, γ₀. See Juselius (2006), Chapter 6, for an exposition.

Accounting for extraordinary institutional events
Ignoring the effects of extraordinary events on the variables of the model, even though the theory model assumes away such effects, is likely to bias the statistical inference. To distinguish empirically between ordinary and extraordinary institutional effects, we consider the former to be indistinguishable from NI(0, Ω), whereas the latter stick out as non-normal residuals. Thus, if the effect of an event is too large to be satisfactorily explained by the VAR variables we consider it potentially to be an extraordinary event.
When the sample period is long enough, most macroeconomic variables exhibit extraordinary changes as a result of interventions, reforms, etc. The present period is no exception. With D_t = 0 in (24), the model did not pass the misspecification tests. In particular, the normality tests failed miserably (as they did in PI) due to a number of outliers which coincided with important institutional events. To account for them, the following dummy variables were included in the model: D_s,t′ = [Ds7801], D_p,t′ = [Dp7003, Dp7403, Dp7404, Dp7801], and D_tr,t = [Dtr8001]. They are defined by Ds7801 = 1 for t ≥ 1978:1, 0 otherwise; DpYYxx = 1 in 19YY:xx, 0 otherwise; and Dtr8001 = 1 in 1980:1, -1 in 1980:3, 0 otherwise. To avoid broken linear trends in the data, the shift dummy is restricted to lie in the cointegration relations, whereas Φ_p D_p,t and Φ_tr D_tr,t are unrestricted in the VAR. The estimated effects are reported in Table 2. The misspecification tests in Table 2 show that the model passed the normality and autocorrelation tests, but not the ARCH test. The latter is probably the result of higher variability of macroeconomic variables in the seventies as compared to the rest of the sample. As the VAR results are fairly robust to moderate ARCH (Rahbek et al., 2002), we have disregarded this problem.
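The dummy definitions above can be constructed mechanically; a pandas sketch (only Ds7801, Dp7003 and Dtr8001 are shown):

```python
import pandas as pd

# quarterly index covering the sample 1960:1-2002:1
idx = pd.period_range("1960Q1", "2002Q1", freq="Q")
d = pd.DataFrame(index=idx)

# mean-shift dummy: 1 from 1978:1 onwards, 0 before
d["Ds7801"] = (idx >= pd.Period("1978Q1", freq="Q")).astype(int)

# permanent blip dummy: 1 in a single quarter, 0 otherwise
d["Dp7003"] = (idx == pd.Period("1970Q3", freq="Q")).astype(int)

# transitory dummy: +1 in 1980:1, -1 in 1980:3 (effects cancel over time)
d["Dtr8001"] = ((idx == pd.Period("1980Q1", freq="Q")).astype(int)
                - (idx == pd.Period("1980Q3", freq="Q")).astype(int))

print(d["Ds7801"].sum(), d["Dp7003"].sum(), d["Dtr8001"].sum())
```

The zero sum of the transitory dummy reflects why it can enter unrestrictedly, while the shift dummy, which permanently changes the level, must be restricted to the cointegration relations to avoid broken linear trends.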
That the model passed the misspecification tests reasonably well does not yet imply parameter constancy, and the next section will check the stability of the parameters over the sample period.

The constancy of parameters
Based on a battery of recursive tests, the null of constant parameters was massively rejected. For example, Figure 3 (the test of β(t) = 'known β') shows that the test of whether β̃ ∈ sp(β(t₁)), t₁ = T₁, …, T, where β̃ is the estimated cointegration vectors based on the period 1981:2-2002:2, was rejected for all recursive sample endpoints T₁ = 1996:2, …, 2002:2. Therefore, we perform the analysis separately for the periods 1960:1-1979:4 and 1981:2-2002:1, which is the same split as in PI. To avoid the outlier observations in 1980, the second period starts at 1981:2. The recursive constancy tests for the sub-period models are by necessity based on fairly few observations, and whether parameters are truly constant or not is difficult to establish with great confidence. Since there was no obvious sign of parameter non-constancy, we consider the data generating process to be reasonably constant within the two periods.
Because the previous test results were derived under the incorrect assumption of constant parameters, we need to check the sub-model specifications once more. Using the same deterministic components as in the full model, all tests improved (except for ARCH in the first period): the multivariate normality test passed with a p-value of 0.78 in the first period and 0.77 in the second; the multivariate LM(1)/LM(2) test for no autocorrelation with a p-value of 0.57/0.30 in the first period and 0.24/0.40 in the second; the multivariate ARCH(1)/ARCH(2) test with

Rank determination
According to the theoretical scenario (18) we would expect one stochastic trend and one deterministic common trend. However, an additional root was close to unity suggesting the possibility of one more common stochastic trend in the empirical model. As already discussed, leaving a near unit root in the model is likely to make some inference unreliable.
Preferably, unless the sample period is very long, such roots should, therefore, be approximated with unit roots. As a result of the sample split, the number of observations in the two periods is not very large, and the power of the trace test to reject the null of a unit root against stationary alternatives close to the unit circle is likely to be low. Therefore, we also provide other information, such as the (modulus of the) largest unrestricted characteristic root (ρ_max) for all choices of r and the highest t-value of the estimated α_rj coefficients in the r-th cointegration relation for r = 1, …, 4. The results are reported in Table 3, where λ_i are the eigenvalues based on which the trace test is calculated. The trace test has been small-sample Bartlett corrected (Johansen, 2002) and the asymptotic tables have been simulated to account for the shift dummy in the cointegration relations (Nielsen, 2004) in the first period.
The results in Table 3 generally suggest a rank of two for both periods, although a rank of one could have been chosen in the first period based on the trace test. A choice of three cointegration relations would leave a fairly large root (0.90/0.96) in the model. In this case, the long-run cointegration structure (22) was roughly acceptable for both sub-periods, but the three cointegration relations exhibited strong evidence of both deterministic and stochastic trends. Thus, allowing a near unit root in the cointegration space would make the subsequent stationarity testing somewhat illusory and would have blurred the distinction between the persistent movements in the data (the long business cycles) and the stationary movements around steady state (the short cycles). Thus, we find the choice of r = 2 to be econometrically more defensible.

Testing hypotheses
The recursive tests rejected parameter constancy over the full sample period and we shall, therefore, primarily focus on the analysis of the two sub-periods. To be able to compare our results with the PI results we will also report the estimates for the full period, even though from a statistical point of view the latter may not have a meaning. Furthermore, allowing for two driving trends, rather than just one, means that the testing of the assumptions in Section 2.3 becomes less straightforward. To make the cointegration implications of the choice r = 2 more transparent, we reformulate the scenario from Section 3 to include one more stochastic trend, where at this stage the first stochastic trend is assumed to be the same as in (18), i.e. Σ u_1,i = a_t. We note that the inclusion of a second stochastic trend is likely to change the hypothetical cointegration properties. For example, c_t − y_t is stationary only if d_11 = d_12 and d_21 = d_22. The tests performed below will address the questions of (a) whether the time-series properties of at least some of the variables correspond to what is theoretically assumed, (b) whether cumulated shocks to TFP are likely to be one of the driving trends, and (c) whether the hypothetical steady-state relations are strongly mean-reverting.

General hypotheses
We start with the following hypotheses:

1. the trend is excludable from the long-run relations,
2. h_t is stationary around a constant mean,
3. y_t, c_t and k_t are (trend-)stationary,
4. a_t and, hence, k_t act as one of the main driving forces in the model.
The trend was found to be strongly significant in the full sample period and in period I, but not in period II (p-value = 0.34). However, when the model was estimated without a trend in period II, the characteristic polynomial exhibited a fairly large root (0.92) for r = 2. Careful checking suggested that the data in period II contain two stochastic trends (corresponding to roots of 1.01 and 0.96) and, in addition, a complex pair of roots (0.87 ± 0.22i). The latter (a persistent cycle) was almost exclusively present in h_t and could, therefore, not be associated with any one of the remaining variables. Thus, we can either continue with two cointegration relations and accept that they exhibit fairly persistent swings, or conclude that there is no cointegration between the variables in period II. We have chosen the former alternative, albeit recognizing that the stationarity testing allows for fairly persistent movements in the relations. The result of testing the hypothesis under point 2 is given in the column h_t in the upper part of Table 4. The test, asymptotically χ²-distributed, shows that stationarity is rejected in both sub-periods and in the full sample. This implies that h_t has been influenced by at least one of the stochastic trends, possibly both, and the hypothesis that h_t is stationary around a constant steady-state value is strongly rejected. All tests of (trend-)stationarity under point 3 were rejected both in the full period and in the sub-periods. In period II, the tests of stationarity are based on a model without a trend in the long-run relations.
The hypothesis under point 4 states that the main driving stochastic force in this system is given by the cumulated shocks to $k_t$. This can be formulated as the hypothesis that $k_t$ is weakly exogenous and can be tested as a zero row in $\alpha$ in the equation for $k_t$. The weak exogeneity test results in the middle part of Table 4 show that the weak exogeneity of private capital formation is strongly rejected. Thus, the hypothesis that shocks to capital are one of the driving stochastic trends seems untenable with the information in the data, whereas the weak exogeneity of per capita consumption cannot be rejected in any of the sub-periods. The latter suggests that cumulated shocks to aggregate consumption have been one of the driving forces in this period, a result which contradicts the key assumption of the real business cycle model. In the first sub-period, labor is borderline acceptable as weakly exogenous and the second stochastic trend seems primarily associated with shocks to labor, whereas in the more recent period the second trend cannot be associated with the shocks to a particular variable. As a complement to the weak exogeneity tests, Table 4 reports the results of testing a unit vector in $\alpha$, which, if accepted, implies that the variable in question has been purely adjusting (Johansen, 1996; Juselius, 2006). The results show that private capital formation has been purely adjusting independently of the period chosen, which implies that unanticipated shocks to capital have not had any permanent effects on the other variables of the system. This strongly underpins the previous conclusion that the main RBC assumption has little empirical support in the data.

[Figure 5: The US savings rate over period I and II; panel: the consumption/income ratio in the second period]

Some structural hypotheses
The following tests are associated with the stationary/nonstationary behavior of the assumed equilibrium errors of the RBC model:
1. $a_t$ is nonstationary (i.e. contains a near unit root),
2. $y_t - c_t$ is (trend)stationary,
3. $y_t - c_t - h_t$ is (trend)stationary,
4. $y_t - k_t$ is (trend)stationary,
5. $c_t - bH_t$ is (trend)stationary.
A direct test of the nonstationarity of $a_t$ under point 1 is less straightforward as $a_t$ is not directly observable. However, if we consider total factor productivity to be a unit root process ($0.998 \simeq 1.0$), then $a_t = y_t - \alpha k_t - (1-\alpha)h_t - b_1 t$ should be nonstationary. This hypothesis has been imposed on one of the cointegration relations, leaving the second relation unconstrained.

[Figure 6: Graphs of the log-linearized first order conditions; upper panel: consumption-income-labor (log transformed), lower panel: income-capital (log transformed)]

Table 5 reports the test results of the hypotheses 1-5 and the estimated coefficients with the t-ratios based on asymptotic standard errors in square brackets. The stationarity of the Cobb-Douglas function was rejected for the full sample period and the estimated value of $\alpha$ is not consistent with the theoretical assumption that $\alpha < 1.0$. In the sub-periods, stationarity of the Cobb-Douglas function cannot be rejected. However, the estimated values of $\alpha$, 0.35 in the first period and 0.61 in the second, are not very close to the one reported by PI (approximately 0.22). Restricting $\alpha$ to be 0.22 in our VAR model would, by construction, make the Cobb-Douglas production function nonstationary both in the full sample period and in the sub-periods, so that $a_t$ would be a near-unit-root process consistent with the results in PI. Figure 4, upper panel, shows a time graph of $\hat{a}_t = y_t - 0.78h_t - 0.22k_t - 0.0027t$, where $\alpha$ is fixed at the value estimated in PI, but the trend coefficient has been estimated. As expected, the stationarity of $\hat{a}_t$ was rejected based on $\chi^2(3) = 19.84$ [0.0002]. The stationarity of the income-consumption ratio, i.e. the US savings rate, was rejected for all sample periods. This is supported by the graphs in Figure 5, which exhibit pronounced persistence over time.
Since $h_t$ was found to be nonstationary, there is a possibility that the nonstationary savings rate and $h_t$ are cointegrated according to the log-linearized first order conditions (19). Table 5, third panel, reports the stationarity tests of this hypothesis, formulated under point 3 above. It is rejected for all samples.
The stationarity of the income-capital relation in (20), formulated under point 4, was also rejected for all periods as shown in the fourth panel of Table 5. Figure 6 illustrates the persistence of the implied relations. An interesting feature of the consumption-income-labor ratio is its strong decline in the more recent period, suggesting that the economic mechanisms have undergone a fundamental change that the present model (and the chosen data) cannot explain. This feature of the data will become even more evident in the next section.
Point 5 is related to the structural relationship (1) describing an agent's utility when choosing between log consumption and labor. Given the (near) unit roots in the data and the idea of distinguishing between highly persistent and less persistent directions in the data vector, it seems relevant to ask whether highly persistent deviations from equilibrium utility could be consistent with the underlying logic of the RBC model. As we consider highly persistent equilibrium errors to be implausible if the RBC model is correct, our null hypothesis is that $\ln C_t - bH_t$ should be stationary around a constant mean and a trend.
It is, however, not obvious how to test the utility function in our empirical VAR model as (1) is not explicitly part of the log-linearized version of the RBC model. One possibility is to test for cointegration between $\hat{c}_t$ and $\ln H_t$, as it can be shown that the time-series behavior of $\ln H_t$ and $H_t$ is essentially identical when correcting for the different scales. However, with $\ln H_t$ we would not be able to directly compare the coefficient estimated in PI with the estimated cointegration coefficient. Therefore, the cointegration results in the lower part of Table 5 are based on a VAR model where $\ln H_t$ has been replaced by $H_t$. The test results show that the cointegration implications of (1) failed to obtain empirical support, whether based on the full sample period or the sub-samples. For the full period and the second period the estimated coefficient on labor is of incorrect sign, whereas in period I the estimated coefficient, though correctly signed, is much smaller than the coefficient, 0.0046, estimated by PI. Stationarity is rejected for all three periods. The time graph of $c_t - 0.0046H_t - 0.0036t$ reported in the lower panel of Figure 4 exhibits typical nonstationary behavior. Thus, the conclusion that the RBC model is largely untenable with the information in the data seems robust. In the next two sections we shall, therefore, abandon the RBC model and instead report a more exploratory analysis based on a structuring of the data into cointegration relations versus common stochastic trends, interpreting them in terms of the pulling and pushing forces of the underlying data generating process.
What does the data tell if allowed to speak freely?

The pulling forces
As the recursive tests strongly indicated a structural break at around 1979, it does not make sense to estimate the full period model and we will here only report the sub-period results. To preserve the data information, we only impose restrictions which are acceptable with fairly high p-values. As a comparison of similarities and differences between the two periods is of some interest, we shall impose identifying restrictions which are as similar as possible across the two periods. Table 5 showed that the stationarity of the Cobb-Douglas production function was accepted with fairly high p-values in the two sub-samples, and the first cointegration relation is identified by imposing homogeneity between output, capital and labor and a zero restriction on consumption. Fixing one cointegration relation means that the scope for a second interpretable relation is very limited, as the two vectors have to be in $sp(\beta)$. In both periods, the second cointegration vector seemed primarily to describe the US savings rate, so homogeneity between $y_t$ and $c_t$ was imposed on $\beta_2$. However, the consumption/income ratio has exhibited pronounced persistence over the two sample periods, as evidenced by Figure 5, and needs to be combined with another variable to achieve stationarity.
In period I the coefficients on labor and capital were almost equal with opposite signs, so homogeneity between capital and labor was additionally imposed. In period II, the savings rate was found to be cointegrated with capital. The estimates reported in Table 6 define irreducible cointegration relations in the sense of Davidson (1998).
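The identifying restrictions described above can be written out explicitly. The sketch below encodes the two restricted cointegration vectors for the variable ordering $(y_t, c_t, k_t, h_t, t)$ and verifies that the homogeneity and zero restrictions hold by construction; the numerical values are purely illustrative stand-ins, not the Table 6 estimates:

```python
import numpy as np

# Variables ordered (y, c, k, h, trend).

# Relation 1: Cobb-Douglas with a zero restriction on consumption and
# homogeneity between output, capital and labor:
# beta1 = (1, 0, -a, -(1-a), tau1)
def beta1(a, tau1):
    return np.array([1.0, 0.0, -a, -(1.0 - a), tau1])

# Relation 2 (period-I version): savings rate y - c combined with the
# capital/labor ratio, i.e. y/c homogeneity plus k/h homogeneity:
# beta2 = (1, -1, -g, g, tau2)
def beta2(g, tau2):
    return np.array([1.0, -1.0, -g, g, tau2])

b1 = beta1(a=0.35, tau1=-0.013)   # illustrative values only
b2 = beta2(g=0.10, tau2=0.0028)

# Check that the restrictions hold by construction
print(b1[1] == 0.0)                             # zero on consumption
print(np.isclose(b1[0] + b1[2] + b1[3], 0.0))   # y/k/h homogeneity
print(np.isclose(b2[0] + b2[1], 0.0))           # y/c homogeneity
print(np.isclose(b2[2] + b2[3], 0.0))           # k/h homogeneity (period I)
```

In a full CVAR analysis each restricted vector $\beta_i = H_i \varphi_i$ would be estimated by restricted maximum likelihood and the over-identifying restrictions tested jointly with a likelihood-ratio test.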
The estimates of the Cobb-Douglas production function suggest a shift in the labor/capital share of output over the two periods, with capital becoming more dominant with time. In period I the estimated trend coefficient is consistent with TFP growing linearly at 1.3% per year. In period II the trend was excludable altogether from the cointegration relations. This might suggest that the deterministic trend (in labor-augmented technical progress) has influenced capital and output similarly (with identical slope coefficients) in period II, but not in period I. In period I, the consumption/income ratio is negatively related to the capital-labor ratio, suggesting that US investment was primarily financed by domestic savings in this period. The trend estimate is consistent with an annual increase in the consumption/income ratio of approximately 0.28% when the effect of the capital/labor ratio has been accounted for. In period II the savings rate and capital are also negatively related, but the coefficient on capital is now smaller. This might be evidence of the increased US reliance on foreign savings. The adjustment dynamics can be inferred from the estimated $\alpha$ coefficients. In the first period it is interesting to note that output is increasing in the equilibrium error of the Cobb-Douglas production function, though not very significantly so, whereas it is equilibrium correcting to the income/consumption relation. As the equilibrium correction in the second relation is much stronger than the error-increasing effect in the first, the overall behavior is stable. The equilibrium-correcting behavior of capital to both the savings rate relation and the Cobb-Douglas relation adds to the stability of the system.
Thus, even though an increase in consumption relative to income tends to increase output in the short run, a declining savings rate will be associated in the longer run with a declining capital formation/labor ratio, and the lower capital formation will eventually bring output back towards its steady-state value. Neither labor nor consumption is equilibrium correcting, consistent with the weak exogeneity results of Table 4. Even though it is shocks to consumption as well as labor that have been pushing the economy, the estimates in (30) and (31) suggest that it is the consumption shocks that have generated the long stochastic cycles in output. Thus, the empirical results are primarily in favor of a demand-driven business cycle. The adjustment dynamics of the second period are quite different. Output is now equilibrium error correcting both to the Cobb-Douglas relation and to the consumption/income relation. Capital has also been strongly equilibrium correcting to both relations, as it was in the first period, whereas the significant adjustment in hours worked to the consumption/income relation is a new feature. The estimated coefficient shows that hours worked have increased when the consumption/income ratio has been above its steady-state value.
As a final check we report the graphs of the two cointegration relations in Figure 7. The difference in behavior between the two periods is striking, suggesting that the production function/consumption-income framework works reasonably well in the first period, but is totally inadequate in the second. Obviously, to understand the economic mechanisms in the more recent (and more important) period we would need to expand the model and the data to allow for new features.

The pushing forces
While the empirical analysis of the previous section was based on the AR form and concerned with the identification of the long-run relations, the analysis of this section is based on the MA form and addresses directly the question of which empirical shocks have generated the long business cycles in the data.
The VAR model in moving average form is given by:

$x_t = C \sum_{i=1}^{t} \varepsilon_i + C^*(L)\varepsilon_t + X_0, \qquad (28)$

(deterministic terms suppressed), where $C = \tilde{\beta}_\perp \alpha'_\perp$ and $\tilde{\beta}_\perp = \beta_\perp(\alpha'_\perp \hat{\Gamma} \beta_\perp)^{-1}$. Comparing (28) and (29) with the scenario (27) in Section 5, it appears that $\tilde{\beta}_\perp$ can be interpreted as an estimate of the loadings matrix $D$ and $\hat{\alpha}'_\perp \sum_{i=1}^{t} \varepsilon_i$ as an estimate of the common stochastic trends $\sum_{i=1}^{t} u_i$. As the weak exogeneity of aggregate consumption was accepted with a high p-value in Section 5, the estimates of $\alpha_\perp$, $\beta_\perp$ and $C$ reported below are subject to this restriction.
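The long-run impact matrix $C$ can be computed directly from $\alpha$, $\beta$ and $\Gamma$. The sketch below does this for an illustrative 4-variable, rank-2 system (random hypothetical parameter values, not the paper's estimates) and verifies the two defining properties of $C$:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative alpha, beta, Gamma for a 4-variable system with r = 2
rng = np.random.default_rng(3)
alpha = rng.standard_normal((4, 2))
beta = rng.standard_normal((4, 2))
Gamma = np.eye(4) + 0.2 * rng.standard_normal((4, 4))

# Orthogonal complements: alpha_perp' alpha = 0 and beta_perp' beta = 0
alpha_perp = null_space(alpha.T)   # 4 x 2
beta_perp = null_space(beta.T)     # 4 x 2

# C = beta_perp (alpha_perp' Gamma beta_perp)^{-1} alpha_perp'
beta_tilde = beta_perp @ np.linalg.inv(alpha_perp.T @ Gamma @ beta_perp)
C = beta_tilde @ alpha_perp.T

# Defining properties of the long-run impact matrix:
print(np.allclose(C @ alpha, 0.0))   # True: C annihilates alpha
print(np.allclose(beta.T @ C, 0.0))  # True: the cointegration relations
                                     # load zero on the stochastic trends
```

These two orthogonality properties are what make the decomposition into pushing forces ($\alpha'_\perp \sum \varepsilon_i$) and pulling forces ($\beta' x_t$) internally consistent.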
where $u_{1,t} = \hat{\alpha}'_{\perp,1}\varepsilon_t = \varepsilon_{c,t}$ and $u_{2,t} = \hat{\alpha}'_{\perp,2}\varepsilon_t = 0.56\,\varepsilon_{y,t} + 0.17_{[1.8]}\,\varepsilon_{k,t} + \ldots$ In both periods, the equation defining the second autonomous shock seems primarily associated with hours worked, though in period II with some additional significant effects from output. The final impact of a 'consumption' and a 'labor demand' shock on the variables is given by the loadings on the stochastic trends reported in (30) and (31). We note that the long-run impact of a consumption shock was significantly positive on all variables in both periods, but larger in the first period. The final impact of a 'labor demand' shock is significantly negative for capital in both periods, consistent with a labor/capital substitution effect. The ML estimates of the linear trend effect show that the linear growth rates have generally been higher in the more recent period. Output and consumption have exhibited similar growth rates, whereas capital has grown relatively more in period II and relatively less in period I, with the average over the two periods quite close to the growth rate of income and consumption. Even though these estimates might be sensitive to the unexplained trend behavior of the more recent period, the results seem to cast some doubt on the assumption of identical growth rates in the RBC model.
Interpreting VAR residuals as structural shocks and associating them with labels such as demand or supply shocks, technology shocks, etc. is highly debatable unless one can argue convincingly that the estimated shock is unanticipated, unique and invariant. Needless to say, estimated residuals seldom satisfy such criteria, and the residuals here are no exception, as the estimated residual correlations in Table 7 show. The question is whether an orthogonalization of the residuals, as in a 'structural VAR' analysis, would make the results more structural. As residual correlation can have many causes, such as omitted variables, orthogonalization is not necessarily a good solution. For example, extending our present data with prices of output, capital, and labor is very likely to change the residuals and, hence, the estimated 'structural' shocks.
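One reason orthogonalization is not innocuous is that the standard Cholesky scheme makes the 'structural' shocks depend on the (arbitrary) ordering of the variables. A minimal bivariate sketch (hypothetical residual correlation of 0.5, chosen for illustration):

```python
import numpy as np

# Correlated reduced-form residual covariance (cf. the correlations in Table 7)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])

# Cholesky orthogonalization under the original ordering (1, 2)
P1 = np.linalg.cholesky(Sigma)

# Same exercise under the reversed ordering (2, 1), mapped back to the
# original variable order
flip = Sigma[::-1, ::-1]
P2 = np.linalg.cholesky(flip)[::-1, ::-1]

# Both factorizations reproduce Sigma exactly ...
print(np.allclose(P1 @ P1.T, Sigma))  # True
print(np.allclose(P2 @ P2.T, Sigma))  # True
# ... but they imply different impact matrices, hence different 'shocks'
print(np.allclose(P1, P2))            # False
```

Both factorizations fit the data equally well, so the data alone cannot discriminate between the two sets of orthogonalized shocks; the choice is an identifying assumption, not an empirical finding.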

Do the data have anything useful to say?
Some of the results presented in this section were more interpretable, others less so. Thus one might raise the question, as one referee did, whether at the end of the day DSGE modelling with interpretable theory is to be preferred to VAR modelling without theory. No doubt, many economists would be sympathetic to such a view. We shall, however, argue that the relevant issue is how theory enters empirical modelling, not whether one should choose between econometric modelling with or without theory. Hoover (2006) expresses this with admirable clarity: "The Walrasian approach is totalizing. Theory comes first. Empirical reality must be theoretically articulated before it can be empirically observed. There is a sense that the Walrasian attitude is that to know anything, one must know everything." ... "There is a fundamental problem: How do we come to our a priori knowledge? Most macroeconomists expect empirical evidence to be relevant to our understanding of the world. But if that evidence only can be viewed through totalizing a priori theory, then it cannot be used to revise the theory." ... "The Marshallian approach is archaeological. We have some clues that a systematic structure lies behind the complexities of economic reality. The problem is how to lay this structure bare. To dig down to find the foundations, modifying and adapting our theoretical understanding as new facts accumulate, becoming ever more confident in our grasp of the superstructure, but never quite sure that we have reached the lowest level of the structure." Peter Ireland has clearly followed the Walrasian approach in the sense of postulating from the outset what the relevant theory is. Our approach is clearly Marshallian (or rather post-Walrasian, Colander (2006)), in the following sense: we use the VAR model to structure the data and interpret the findings against the background of not just one, but as many theories as needed.
As the empirical analysis of this section demonstrated, the data chosen by PI are far from sufficient to allow us to 'reach the superstructure'. There are 'some clues suggesting that a systematic structure lies behind the complexities of the economic reality', but we also have to dig a lot deeper before we can be more confident in our conclusions.
Structuring the Ireland data by the cointegrated VAR produced the following general results:
1. The first period was better explained by the chosen data than the second.
2. A trend-stationary Cobb-Douglas production function with plausible coefficients seemed to work well in the first but not in the second period.
3. The variable hours worked was found to be nonstationary rather than stationary, against the assumption in PI.
4. There seemed to be two stochastic trends: one associated with permanent shocks to consumption, the other with permanent shocks to labour.
Some clues can be dug out from these results. We begin with the first period.
1. The trend-stationarity of the Cobb-Douglas function suggests that TFP might be well approximated by a linear trend.
2. The long business cycles in GDP seem to have been associated with permanent shocks to consumption, rather than with shocks to TFP. Thus, they seem demand rather than supply driven.
3. The finding that the "additional" stochastic trend was mainly associated with permanent shocks to labor might imply that 'labour augmented technological progress' has a stochastic (as well as a deterministic) trend component.
Thus, the first superficial digging into the first period might suggest that (1) the linear trend captures both the linear growth in TFP and technological progress, (2) demand shocks have triggered the long business cycles, together with (3) shocks to labor productivity/labor intensity.
The digging into the second period provides us with the following clues:
1. The present choice of variables is insufficient to adequately explain the variation in the data.
2. The long-run growth trend can no longer be approximated by a linear time trend.
3. Cumulated shocks to consumption and labor seem to have generated the long business cycles, similarly to the first period. Thus, demand shocks and labor productivity shocks might, even in this period, have played an important role. Nonetheless, this conclusion is highly tentative as it is based on an econometrically unsatisfactory model.
An interesting question is what should replace the insignificant linear trend in the second period. This prompts us to ask in what sense the two periods differ so as to explain the change in trend behavior. To us it seems obvious that the major difference is to be found in the degree of globalization, worldwide capital deregulation and increasing international competitiveness. This suggests that the second period, in particular, should not be modelled in the context of a closed economy, prompting an extension of the economic model and the data. The persistent movements of the two cointegration relations in the second period suggest that the real exchange rate might do the job, as it has exhibited similarly pronounced persistence in the second period.
Thus, the results of this section provide a set of empirical findings on the pulling and pushing forces of the chosen data. Structuring the data in this way offers a number of 'sophisticated' stylized facts that can guide us on the road towards empirically more relevant theory models. When asking the question 'What does the data tell when allowed to speak freely?' we have tried to dig out as much information as possible from this quite limited information set. The results were interpreted (admittedly in a rather loose way) against the background of many potentially important macroeconomic theories. This is clearly contrary to using just one theory, but certainly not the same as using no theory. We do not claim that we now know the 'truth', but we do believe that the analysis has provided us with a number of clues that might be useful when deciding how and where to continue digging.

Concluding discussion
One aim of this paper was to propose a procedure for assessing economic models based on the VAR model. Another was to demonstrate the advantages of properly accounting for unit roots (near unit roots) in the data as a robustification of the statistical and economic inference. As a means to achieve these goals, we proposed that all basic assumptions underlying the theory model should be formulated as a set of testable hypotheses on the cointegration and common trends properties of the CVAR model. This was dubbed a 'theory consistent hypothetical scenario' as it summarizes the main characteristics the data must satisfy for the theory model to have empirical content.
We derived a 'theory consistent hypothetical scenario' for the DSGE model in PI and estimated a well-specified cointegrated VAR model to demonstrate that, in fact, most of the assumptions underlying the DSGE model were testable and that most of them were rejected. The story the data wanted to tell, when allowed, was in fact very different from the RBC story. For example, the observed business cycle fluctuations seemed to originate from shocks to consumption and labor, rather than from shocks to technology or total factor productivity as assumed by the RBC model.
The assumption that the structural parameters of a theoretical model remain constant over time did not seem tenable with the information in the data. Strong evidence of parameter non-constancy was detected by a number of recursive methods. As a result, the sample was divided into two parts and a cointegrated VAR analysis performed for each sub-sample. This allowed us to compare similarities and differences in the long-run relations and, in particular, in the adjustment dynamics due to changes in the main economic mechanisms between the two periods. We found that, independently of sample period, the basic RBC assumption that shocks to capital and TFP have generated the business cycles was rejected. Instead we found that it is the empirical shocks to consumption and labor that have generated the business cycles. This finding was robust in both sample periods.
An additional advantage of splitting the sample was that we were able to demonstrate that the income, consumption, labor, and capital data did a reasonable job in 'explaining' business cycle movements in the first period, but a less satisfactory one in the more recent period, suggesting that some important information is missing. For example, the effect of increased globalization on US savings and investment decisions might be important in the more recent period.
But, this said, we find it implausible that the empirical conclusions will remain unchanged when relaxing the ceteris paribus assumptions underlying the choice of data. This is because the results are likely to be highly sensitive to a number of simplifying assumptions extending outside the DSGE model in PI. For example, equating a residual with an autonomous shock can be very misleading unless the model contains all relevant variables. Also, equating an observed variable with the true variable of the theory model (Haavelmo, 1944) is often difficult to defend. Dividing all variables by population (>16 years) to obtain per capita determinants (because the theory model assumes homogeneous labor and constant preferences over time) can bias the results if preferences have changed and labor is not homogeneous. Exclusively analyzing real variables because the model assumes nominal and real separation can be misleading if nominal and real interaction effects are strong in the data.
Such concerns are, however, easily met (though not yet in this paper), as it is straightforward to include population as an unrestricted variable in the CVAR and then test whether it is excludable (which would be the case if the per capita assumption is correct), or to add a measure of nominal growth, say the inflation rate, or the prices of capital and labor. By gradually increasing the information set, it is possible to build on previous results (as the cointegration property is invariant to changes in the information set) and improve our understanding of how sensitive previous conclusions are to the ceteris paribus clause.
Thus, as long as we have not yet checked the robustness of the empirical results to the above points, we do not claim that our CVAR story is 'structural', even though it has empirical content. This is contrary to the DSGE model in PI, which tells a 'structural' story, but with very little empirical content. The question is whether looking at the complicated, dynamic, fast-changing economic reality through the glasses of a structured VAR, rather than through a highly stylized (and often empirically questionable) theoretical model, provides a more reliable way of gaining economic insight. In the latter case there is a significant risk of overlooking signals in the data suggesting that other mechanisms are at work in the economy. The fact that the RBC assumption seemed to explain the data reasonably well in PI, despite the strong empirical rejection when the data were allowed to speak freely, suggests that conclusions from models based on strong economic priors and many untested assumptions might say more about the faith of the researcher than about the economic reality.
On the other hand, VAR estimates can be, and often are, sensitive to all kinds of things, in particular the specification of the deterministic components and the choice of sample period. But when making these specification choices it is mandatory to follow scientifically valid principles. For example, the specification of deterministic components (constant, trend, dummies) has to be made as objectively as possible (and not as a means to influence the results in a desired direction), the choice of sample period should define reasonably constant parameter regimes, etc. The advantage of such strict rules is that in the end one can claim that the VAR model represents the basic information in the data. Hence, empirical models that are inconsistent with the VAR results must have imposed inadmissible restrictions on the data (Hendry and Mizon, 1993).
The Walrasian approach of assuming just one theory model and then defining success as the ability to make the data conform to this model seems a recipe for incorrect empirical analysis. No doubt, many questionable choices have been made in the fulfillment of such endeavours. Ireland (2004) is probably just one example among numerous others.