## Abstract

We compare local and global polynomial solution methods for DSGE models with Epstein-Zin-Weil utility. We show that model implications for macroeconomic quantities are relatively invariant to the choice of solution method but that a global method can yield substantial improvements for asset prices and welfare costs. The divergence in solution quality is highly dependent on parameters which affect value function sensitivity to TFP volatility, as well as the magnitude of TFP volatility itself. This problem is pronounced for calibrations at the extreme of those accepted in the asset pricing literature and disappears for more traditional macroeconomic parameterizations.

## 1 Introduction

This paper compares solution methods for production-based asset pricing models with recursive utility. In particular, we consider a standard neoclassical growth model with Epstein-Zin-Weil utility. Bansal and Yaron (2004) demonstrate the importance of recursive utility for explaining general equilibrium asset prices in an endowment economy with an exogenously specified aggregate consumption process. Subsequent extensions to production economies, e.g. Croce (2013), Campanale, Castro, and Clementi (2010), Kaltenbrunner and Lochstoer (2008), and Rudebusch and Swanson (2012), show similar promise in reconciling financial and quantity dynamics and are becoming increasingly common in the literature. Accordingly, it is important to understand the behavior of solution methods for these models, and our work contributes to the literature along that dimension.

We focus on two standard methods of solution for dynamic stochastic general equilibrium (DSGE) models: a global projection method using Chebyshev basis functions and a local perturbation method of various orders. Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) compare these solution methods for a neoclassical growth model with constant relative risk aversion utility, and demonstrate that both methods are broadly suitable. Our aim is to investigate the appropriateness of these solution methods when utility takes a more general, recursive form. Notably, we find that the two solution methods produce roughly equivalent implications for macroeconomic quantities, but can be very different with respect to asset prices and welfare costs. This discrepancy is exacerbated as the volatility of total factor productivity (TFP) is increased.

Several considerations drive our results. First, the value function corresponding to Epstein-Zin-Weil utility exhibits a high degree of curvature with respect to the TFP volatility; second, perturbation is a local Taylor approximation around the deterministic steady state, where the TFP volatility is zero; and third, in the presence of recursive utility, asset prices and welfare costs, not quantities, depend critically on the shape of the value function. The upshot is that for models with high output volatility, perturbation attempts to approximate a rapidly changing function at a locus (zero TFP volatility) that is far from the point of interest (high TFP volatility). Taylor’s Theorem shows that the resulting approximation error can be arbitrarily bad and can even increase with the order of approximation. Thus, when the TFP is calibrated to have high volatility, the resulting approximation of the value function obtained by perturbation can be very bad, yielding poor approximations for prices and welfare costs.
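The mechanism above can be illustrated with a minimal sketch (not from the paper): Taylor expansions of a square-root function, whose curvature near zero is analogous to the value function's curvature near zero TFP volatility. All names and numerical values below are hypothetical, chosen only to demonstrate how raising the approximation order helps inside the radius of convergence and hurts outside it.

```python
# Illustrative sketch: Taylor approximations of f(sigma) = sqrt(a + sigma)
# expanded at sigma = 0 mimic perturbation around the deterministic steady
# state. The series converges only for |sigma| < a (the distance to the
# singularity at sigma = -a), so increasing the order improves the
# approximation inside that radius and worsens it outside.
import math

def taylor_sqrt(a, sigma, order):
    """Partial sum of sqrt(a + sigma) = sqrt(a) * (1 + sigma/a)^(1/2)
    via the binomial series, truncated at the given order."""
    r = sigma / a
    coeff, total = 1.0, 1.0
    for k in range(1, order + 1):
        coeff *= (0.5 - (k - 1)) / k   # generalized binomial coefficient C(1/2, k)
        total += coeff * r ** k
    return math.sqrt(a) * total

a = 0.01                        # expansion point (the "steady state")
err = lambda s, n: abs(taylor_sqrt(a, s, n) - math.sqrt(a + s))

# Inside the radius of convergence (|sigma| = 0.005 < a): error shrinks with order.
assert err(0.005, 10) < err(0.005, 2)
# Outside (|sigma| = 0.03 > a): error GROWS with order, as in the
# high-volatility calibrations discussed below.
assert err(0.03, 10) > err(0.03, 2)
```

The design point is that the failure is not sloppy numerics: it is a property of Taylor series evaluated beyond their radius of convergence, which no increase in order can repair.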

In addition to the direct effect of TFP volatility on approximation quality, we investigate the effect of other parameters. We find that the subjective discount factor, output growth and risk sensitivity are also important for solution accuracy, but in a more indirect manner: the solution methods diverge when these parameters increase the value function’s sensitivity to TFP volatility. We describe the interplay between these parameters and solution accuracy.

Our problem dictates that, at a minimum, we must approximate the value function and policy function of the endogenous choice variable (consumption). In theory, if our solutions are sufficiently accurate, we can use them to accurately approximate any other variable in the model, even if it depends on the value and policy functions in a nonlinear way. The reverse is also true – a poor solution for *either* the value or policy functions will contaminate approximations of other model quantities. However, in the case of perturbation, we find that computing a local approximation of asset prices directly can ameliorate their dependency on the (poorly approximated) value function and improve their accuracy. We show how this modification can bring the model’s asset pricing implications far closer to those produced by a global method. Unfortunately, the same result does not hold for welfare costs.

Pohl, Schmedders, and Wilms (2015) document similar findings in the context of a standard long-run risk endowment economy. In particular, they compare a local, log-linear solution with global projection methods and demonstrate that the log-linear solution gives rise to very large approximation errors. Caldara et al. (2012) compare perturbation and projection methods for a production economy with recursive utility and stochastic TFP volatility and find that local methods are very accurate and strike an excellent balance between computation time and accuracy. Their results, however, are narrowly applicable to macroeconomic (not asset price) dynamics because they explore a limited parameter space which simultaneously precludes the numerical problems we document as well as the ability to generate sufficient volatility for asset returns. Our work expands the parameter space so as to generate reasonable asset price dynamics and shows that local approximations deteriorate in precisely this region. In a similar vein, Parra-Alvarez (2017) extends Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) to continuous-time models. Specifically, that work, which confines attention to separable utility, concludes that projection methods are superior to perturbation, but that the latter typically strikes a good balance between accuracy and computational cost.

Recent literature shows that as economic models push the boundaries of complexity, global solution methods become requisite. Fernández-Villaverde et al. (2015) document that perturbation methods are not adequate when dealing with non-linearities that are associated with a kink in interest rate policy. Likewise, Fernández-Villaverde and Levintal (2016) compare solution methods for a model with time-varying probabilities of rare disasters and Epstein-Zin-Weil utility and find that low-probability, high-impact events induce non-linearities that cannot be captured by high-order perturbation methods. Irarrazabal and Parra-Alvarez (2015) document similar findings for a continuous-time, time-varying rare disaster model with recursive utility.

When perturbation is accurate, it is often the preferred solution method since it is extremely fast and can be efficiently nested within an estimation framework. For the particular problem we consider, the projection and perturbation methods are essentially equivalent in computation time, as the model entertains only one state variable^{[1]}. However, if the complexity of the model is increased (more state variables added), a standard global approximation method would quickly become computationally burdensome due to the curse of dimensionality. For this reason, much of the recent literature in computational economics has been devoted to improving global methods for high-dimensional problems. Judd, Maliar, and Maliar (2011) introduce a simulation-based solution method for DSGE models that ameliorates the curse of dimensionality by focusing on the ergodic region of the solution space. Maliar and Maliar (2015) further improve this simulation method by employing a clustering algorithm to reduce the dimensionality of the ergodic space. Judd et al. (2014) and Brumm and Scheidegger (2015) make a number of advances for Smolyak sparse grids, which reduce the sensitivity of global approximation methods to the dimensionality of the state space. Finally, Cai, Judd, and Steinbuks (2015) and Levintal (2016) deploy entirely new approaches to arrive at global approximations for high-dimensional problems. The former, the non-linear certainty equivalent method, is suitable for DSGE models with up to 400 state variables; the latter, “Taylor projection”, exploits the efficiency of Newton’s method to quickly arrive at global approximations.

The results of this paper suggest that when using perturbation for production-based asset pricing models, it would be wise to compare the results with a global solution method. In particular, for models with a high calibration of TFP volatility, a global method, while computationally more burdensome, is more suitable since it yields a superior approximant to the value function.

This paper proceeds as follows. Section 2 describes our model in detail, Section 3 outlines the solution methods applied to the model, Section 4 reports the results of the two solution methods in addition to diagnostics that explore their accuracy and Section 5 concludes.

## 2 Model

We follow Kaltenbrunner and Lochstoer (2008) in specifying a basic neoclassical growth model with Epstein-Zin-Weil utility. As mentioned in the previous section, this model is very similar to that of Croce (2013), the primary difference being that Croce (2013) explicitly specifies a long-run risk component in the total factor productivity process. While Croce (2013) does a better job of explaining both quantity and price dynamics, and while the model of Kaltenbrunner and Lochstoer (2008) exhibits several deficiencies with respect to asset prices, we choose the latter specification for two reasons. First, it is a simpler generalization of the neoclassical growth model, obtained by substituting Epstein-Zin-Weil utility for power utility, and nests the more standard model. Second, through appropriate normalization, the model has only one state variable. This simplicity allows us to emphasize the computational results with greater ease. We anticipate that the results will extend to more complicated, and perhaps appealing, situations.

### 2.1 Preferences

Our economy admits a representative agent whose utility function follows Epstein and Zin (1989) and Weil (1990):

$$U_t = \left[(1-\beta)\,C_t^{1-1/\psi} + \beta\left(\mathbb{E}_t\!\left[U_{t+1}^{1-\gamma}\right]\right)^{\frac{1-1/\psi}{1-\gamma}}\right]^{\frac{1}{1-1/\psi}}, \qquad (1)$$

where 0 < *β* < 1 is the subjective discount factor, 𝔼_{t} is the conditional expectations operator, *C*_{t} denotes aggregate consumption, *γ* denotes the agent’s coefficient of relative risk aversion, *ψ* denotes the agent’s inter-temporal elasticity of substitution (IES), and

$$\theta \equiv \frac{1-\gamma}{1-1/\psi}.$$

As shown in Epstein and Zin (1989) and Weil (1989), the log of the stochastic discount factor, *m*_{t+1}, for these preferences is

$$m_{t+1} = \theta \ln \beta - \frac{\theta}{\psi}\,\Delta c_{t+1} + (\theta - 1)\,r_{a,t+1},$$

where Δ*c*_{t+1} denotes log consumption growth and *r*_{a,t+1} denotes the log gross return on the aggregate wealth portfolio. Of particular importance is the presence of *r*_{a,t+1} in the specification of the discount factor, which makes innovations to expected consumption growth a priced risk factor; in the standard model with CRRA utility, *θ* = 1 and the last term disappears. It is this feature of Epstein-Zin-Weil utility that makes agents concerned about shocks to expected future consumption growth and that allows us to amplify the equity premium (Mehra and Prescott, 1985) while evading the risk-free rate puzzle (Weil, 1989).
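As a quick numerical check of this algebra, the sketch below (with illustrative parameter values, not the paper's calibration) computes *θ* and the log SDF, and verifies that setting *γ* = 1/*ψ* collapses the pricing kernel to the CRRA case, where the return term drops out.

```python
# Sketch: log SDF m_{t+1} = theta*ln(beta) - (theta/psi)*dc + (theta-1)*ra,
# with theta = (1 - gamma) / (1 - 1/psi). When gamma = 1/psi (CRRA),
# theta = 1 and the r_a term disappears, leaving ln(beta) - gamma*dc.
# The values of dc (log consumption growth) and ra (log wealth return)
# below are hypothetical.
import math

def log_sdf(beta, gamma, psi, dc, ra):
    theta = (1.0 - gamma) / (1.0 - 1.0 / psi)
    return theta * math.log(beta) - (theta / psi) * dc + (theta - 1.0) * ra

beta, dc, ra = 0.998, 0.004, 0.015

# Epstein-Zin-Weil case: gamma = 5, psi = 1.5 implies theta = -12,
# so the wealth-return term carries substantial weight.
m_ezw = log_sdf(beta, 5.0, 1.5, dc, ra)

# CRRA case: gamma = 2 = 1/psi with psi = 0.5 implies theta = 1.
m_crra = log_sdf(beta, 2.0, 0.5, dc, ra)
assert abs(m_crra - (math.log(beta) - 2.0 * dc)) < 1e-12
```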

### 2.2 Technology

A single firm owns the capital stock and produces a consumption good via Cobb-Douglas technology, using labor and capital as inputs:

$$Y_t = K_t^{\alpha}\left(Z_t H_t\right)^{1-\alpha},$$

where *Z*_{t} is the labor-augmenting stochastic technology level, *H*_{t} denotes the number of hours worked, *K*_{t} represents capital and *α* is the share of capital in the production function. Under the assumption that the agent does not value leisure, hours are supplied inelastically for all *t*, since *H*_{t} does not appear in the utility function. Normalizing *H*_{t} = 1, output depends only on capital and the technology level.

The log technology process, *z*_{t} ≡ ln *Z*_{t}, evolves according to

$$z_{t+1} = \mu + \varphi z_t + \sigma_z \varepsilon_{t+1}, \qquad \varepsilon_{t+1} \sim \mathcal{N}(0,1).$$

We limit ourselves to the special case of *φ* = 1, since this allows us to retain only one state variable; our computational results are similar for the case of persistent, yet trend stationary *z*_{t} (when |*φ*| < 1). Hence, the log of TFP is a random walk with drift parameter *μ*, and in this special case shocks to technology are permanent.
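A short simulation sketch makes the random-walk case concrete: with *φ* = 1, log TFP growth is iid normal with mean *μ*, so a long sample of growth rates recovers the drift. The value *μ* = 0.004 follows the calibration discussed later; *σ*_{z} = 0.02 is one of the volatility values considered below.

```python
# Sketch: simulating the phi = 1 (random walk with drift) technology
# process z_{t+1} = z_t + mu + sigma_z * eps_{t+1}, eps ~ N(0, 1).
import random

def simulate_tfp(z0, mu, sigma_z, T, seed=0):
    rng = random.Random(seed)
    z = [z0]
    for _ in range(T):
        z.append(z[-1] + mu + sigma_z * rng.gauss(0.0, 1.0))
    return z

z = simulate_tfp(0.0, 0.004, 0.02, 100_000)
growth = [z[t + 1] - z[t] for t in range(len(z) - 1)]
mean_growth = sum(growth) / len(growth)

# Shocks are permanent, and average log growth recovers the drift mu.
assert abs(mean_growth - 0.004) < 1e-3
```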

### 2.3 Capital accumulation

Following Jermann (1998), we allow capital adjustment costs in the accumulation equation

$$K_{t+1} = (1-\delta)K_t + \phi\!\left(\frac{I_t}{K_t}\right)K_t,$$

where

$$\phi(x) = \alpha_1 + \frac{\alpha_2}{1-1/\xi}\,x^{1-1/\xi}$$

is an increasing, concave function which induces large changes in the capital stock to be more costly than successive small changes. The parameter *ξ* governs the degree of concavity and has the desirable feature that as *ξ* → ∞, *ϕ*(*x*) becomes the identity function (with an appropriate specification of *α*_{1} and *α*_{2}); that is, capital adjustment costs disappear. At the other extreme, as *ξ* → 0, *I*_{t} → 0, ∀*t*, and the capital stock simply depreciates (net of trend growth at rate *μ*). Hence, for intermediate values, the adjustment cost parameter, *ξ*, allows us flexibility in matching the relative volatilities of consumption and output. As mentioned above, the remaining parameters are defined so as to eliminate adjustment costs in the deterministic steady state: *ϕ*(*x̄*) = *x̄* and *ϕ*′(*x̄*) = 1, where *x̄* denotes the steady-state investment-capital ratio.
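The steady-state restriction pins down *α*_{1} and *α*_{2} in closed form for the Jermann (1998) functional form: *α*_{2} = *x̄*^{1/ξ} and *α*_{1} = −*x̄*/(ξ − 1) deliver *ϕ*(*x̄*) = *x̄* and *ϕ*′(*x̄*) = 1. The sketch below verifies this numerically; the value of *x̄* is a hypothetical stand-in, not the paper's calibration.

```python
# Sketch: Jermann (1998) adjustment cost function
#   phi(x) = alpha_1 + alpha_2 / (1 - 1/xi) * x**(1 - 1/xi),
# with alpha_1, alpha_2 chosen so adjustment costs vanish at the
# steady-state investment-capital ratio xbar (hypothetical value here).
xi = 13.0       # the calibrated concavity parameter from Table 1
xbar = 0.03     # hypothetical steady-state I/K

alpha2 = xbar ** (1.0 / xi)
alpha1 = -xbar / (xi - 1.0)

def phi(x):
    return alpha1 + alpha2 / (1.0 - 1.0 / xi) * x ** (1.0 - 1.0 / xi)

# No adjustment costs at the steady state: phi(xbar) = xbar ...
assert abs(phi(xbar) - xbar) < 1e-12
# ... and phi'(xbar) = 1 (checked by central finite difference).
h = 1e-7
dphi = (phi(xbar + h) - phi(xbar - h)) / (2.0 * h)
assert abs(dphi - 1.0) < 1e-5
# Concavity: a large adjustment is penalized relative to small ones.
assert phi(2.0 * xbar) - phi(xbar) < phi(xbar) - phi(0.0001)
```

As *ξ* → ∞ these formulas give *α*_{1} → 0 and *α*_{2} → 1, recovering the identity function and confirming that adjustment costs disappear in that limit.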

### 2.4 Equilibrium

In equilibrium, the aggregate resource constraint is binding:

$$C_t + I_t = Y_t.$$

In this basic environment, the welfare theorems are satisfied and the solution to the social planner’s problem yields the same allocations as a competitive equilibrium. The planner’s problem is

subject to

By solving the first-order conditions, we obtain the intertemporal Euler equation

where

is an alternative expression for the Epstein-Zin-Weil stochastic discount factor, equivalent to

which is the term scaling the stochastic discount factor in Equation (13). Hence, the Euler equation can be written compactly as

To preserve stationarity in the economy, we normalize all variables by the level of the contemporaneous technology process:

Since the preferences represented by Equation (1) are homothetic (Epstein and Zin, 1989), the normalized system of equilibrium conditions can then be expressed as

where

Augmenting Equations (17)–(20) with appropriately scaled versions of Equations (14), (3), and (9) yields the system of equations that we solve with the methods of the following section.

## 3 Solution methods

We now describe the two methods we use to solve the model of the previous section.

### 3.1 Perturbation

Perturbation methods, suggested for economic models by Judd and Guu (1997) and Judd (1998), and widely popularized by Schmitt-Grohé and Uribe (2004), build an asymptotically valid polynomial approximation of a function around a point where the solution is known. In general notation, perturbation seeks a local approximation to a function, *F*, where

$$F(x(\varepsilon), \varepsilon) = 0, \qquad (21)$$

and where *F*(*x*(0), 0) is known. The typical specialization in economics is for *F* to represent a system of nonlinear stochastic difference equations whose deterministic steady state is known, where *f* is a system of equations including the inter-temporal Euler equation and constraints, and where the polynomial approximation to *f* is a Taylor expansion. However, there is no *a priori* reason to restrict our attention to the inter-temporal Euler equation; since we are interested in computing financial moments and since bond prices in a recursive utility model depend on the value function, it is natural for us to approximate the value function directly. Judd and Guu (1997) and Judd (1998) are early examples of using the value function to generate perturbation conditions. Our particular solution method utilizes both the value function and the intertemporal Euler equation.

We use system (17)–(19) to build approximations of the value and policy functions:

where

and

To obtain these approximations, we take successive derivatives of Equations (17) and (18) with respect to the state variable and *σ*_{z}, and evaluate the resulting systems of equations at the deterministic steady state to obtain closed-form solutions for the coefficients in Equations (25) and (26).

For example, evaluating Equations (17)–(20) at the deterministic steady state is sufficient to determine the zeroth-order coefficients; taking first derivatives (with respect to the state variable and *σ*_{z}) of Equations (17) and (18) and again evaluating at the deterministic steady state allows us to solve for the first-order coefficients.

With approximations of the value and policy functions in hand, we can simulate the model's quantity dynamics. However, we can also approximate other variables of interest by augmenting system (17)–(19) with additional equilibrium conditions. In our case, we are interested in approximating the risk-free rate and welfare costs. Adding Equations (27) and (28) to system (17)–(19) allows us to obtain approximations of these additional variables, as outlined above.

### 3.2 Projection

Similar to perturbation, projection methods seek a polynomial approximation to a function, *F*, as in Equation (21), or more commonly to the special case, *f*, as in Equation (22). However, rather than using the known solution at *ε* = 0 to construct a local approximation, we specify a polynomial expansion over the domain of the state variable, *x*. As before, for the neoclassical growth model, *f* would be comprised of the inter-temporal Euler equation and constraints, and a projection solution would specify a polynomial expansion of the consumption policy that would minimize the Euler equation error.

The analogous approach to our problem would be to use Equations (17) and (18) to obtain approximations of the value and policy functions,

$$\hat V(k) = \sum_{j=0}^{M} a_j\,\varphi_j(k) \qquad (31)$$

and

$$\hat C(k) = \sum_{j=0}^{M} b_j\,\varphi_j(k), \qquad (32)$$

where *M* is the order of approximation and *φ*_{j} denotes the *j*th basis function. Given *M*, we could specify a grid of *N* ≥ *M* + 1 points for the state variable, yielding *N* equations in 2(*M* + 1) unknowns. We could then use a nonlinear solution method to find the coefficients *a*_{j} and *b*_{j}, for *j* = 0, …, *M*, in Equations (31) and (32), that best satisfy (17) and (18).
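The collocation logic can be sketched on a toy problem. In the sketch below (not the model's Euler equation), a known stand-in function plays the role of the unknown policy: with exactly *N* = *M* + 1 Chebyshev nodes, the degree-*M* expansion drives the residual to zero at every node, and off-node accuracy is high for smooth functions. The interval and the stand-in policy are hypothetical.

```python
# Sketch of Chebyshev collocation on a toy "policy" c(k) = k**0.36,
# standing in for the unknown solution, on a hypothetical interval for k.
import numpy as np

M = 5                                 # order of approximation
a, b = 0.5, 2.0                       # approximation interval for k
true_policy = lambda k: k ** 0.36     # hypothetical stand-in

# Chebyshev nodes on [-1, 1], mapped into [a, b]
nodes = np.cos((2 * np.arange(M + 1) + 1) * np.pi / (2 * (M + 1)))
k_grid = a + (b - a) * (nodes + 1) / 2

# With N = M + 1 nodes the collocation system is square: fitting the
# degree-M expansion zeros the residual at every node.
coeffs = np.polynomial.chebyshev.chebfit(nodes, true_policy(k_grid), M)
resid = np.polynomial.chebyshev.chebval(nodes, coeffs) - true_policy(k_grid)
assert np.max(np.abs(resid)) < 1e-12

# Off-node accuracy is also good for a smooth function like this one.
k_test = np.linspace(a, b, 201)
x_test = 2 * (k_test - a) / (b - a) - 1
err = np.abs(np.polynomial.chebyshev.chebval(x_test, coeffs) - true_policy(k_test))
assert np.max(err) < 1e-2
```

Choosing *N* > *M* + 1 instead yields an overdetermined system, solved by minimizing the residuals in a least-squares sense rather than interpolating them away exactly.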

As there is no theorem to guarantee convergence of the preceding approach, we follow an alternative methodology, suggested by Campanale, Castro, and Clementi (2010) and Kaltenbrunner and Lochstoer (2008), which is to couple polynomial approximations of the value and policy functions with value function iteration. Specifically, we seek a polynomial approximation to the value function as in Equation (31). Letting *N*_{k} ≥ *M*, we specify a (not necessarily equally spaced) grid for the capital stock and discretize the technology shock *ε* to the order *N*_{ε} Gauss-Hermite abscissae. To ease notation, we suppress time subscripts and collect the *N*_{k} values of the approximated value function on the grid.
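The Gauss-Hermite step replaces the conditional expectation over the normal shock with a finite weighted sum. The sketch below (illustrative, with hypothetical node count) checks the standard change of variables against a known lognormal moment, using one of the TFP volatility values considered later.

```python
# Sketch: Gauss-Hermite quadrature for E[f(sigma * eps)] with eps ~ N(0, 1):
#   E[f(sigma * eps)] = sum_i w_i * f(sqrt(2) * sigma * x_i) / sqrt(pi),
# where (x_i, w_i) are the Gauss-Hermite abscissae and weights.
import numpy as np

def expect_normal(f, sigma, n_eps=7):
    x, w = np.polynomial.hermite.hermgauss(n_eps)
    return np.sum(w * f(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# Check against the lognormal moment E[exp(sigma * eps)] = exp(sigma^2 / 2),
# using sigma = 0.02 (one of the TFP calibrations considered below).
sigma = 0.02
approx = expect_normal(np.exp, sigma)
exact = np.exp(sigma ** 2 / 2.0)
assert abs(approx - exact) < 1e-12
```

With *n* abscissae the rule integrates polynomials up to degree 2*n* − 1 exactly, so a handful of nodes suffices for the smooth integrands arising in the value function iteration.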

_{k}## 4 Results

We now apply the solution methods outlined in the previous section to the model of Section 2 and state the main result of our paper: while the quantity dynamics of the two methods are essentially equivalent for a variety of parameter values, the same is not true of variables that are tightly linked to the value function, such as asset prices and welfare costs. We discuss the reasons for this result and outline a very simple motivating example that provides intuition for the particular problem we consider. We conclude the section by reporting diagnostics which compare the accuracy of the solution methods.

### 4.1 Calibration

We fix several parameters of our model and report them in Table 1. These values are a widely accepted parameterization of the US economy in the literature; in particular, the depreciation rate and share of capital, *δ* and *α*, respectively, are identical to those of Jermann (1998). The quarterly growth rate, *μ*, implies annual growth of 1.6 percent and the intertemporal elasticity of substitution (IES) parameter, *ψ*, is set in the middle of the range (1,2] advocated by Bansal, Kiku, and Yaron (2007). In practice, we considered alternative values of *μ* and *ψ* but do not report the corresponding solutions and simulations as they do not alter the qualitative nature of our results. Finally, the adjustment cost parameter, *ξ*, was chosen so that the ratio of volatilities of log consumption growth to log output growth matches empirical estimates (in the vicinity of 0.5), which depend on the time period and frequency of the data (see discussion below).

α | δ | ψ | μ | ξ
---|---|---|---|---
0.36 | 0.025 | 1.5 | 0.004 | 13

To understand our parameterization of the TFP volatility, it is instructive to consider the data moments reported in Table 2. The table contains means and volatilities for GDP, aggregate consumption and the 90-day T-bill, both at annual and quarterly frequencies, for several sample periods. We highlight two important features of the data. First, the volatility of log output growth is markedly different between pre-war and post-war samples, the former being roughly 2.5 to 3 times as great as the latter. Second, the mean of the risk-free rate increases and its volatility decreases as the time horizon is curtailed to include fewer years. In the case of the mean, values in later samples are up to twice as large as that of the pre-war sample.

 | 1929–2008 (A) | 1950–2008 (A) | 1950–2008 (Q) | 1960–2008 (Q) | 1970–2008 (Q)
---|---|---|---|---|---
Std(Δc) | 0.0108 | 0.00557 | 0.00490 | 0.00451 | 0.00429
Std(Δy) | 0.0246 | 0.0108 | 0.00980 | 0.00857 | 0.00840
Std(Δc)/Std(Δy) | 0.439 | 0.518 | 0.500 | 0.526 | 0.510
Mean(*r*^{f}) | 0.00847 | 0.0146 | 0.0146 | 0.0173 | 0.0168
Std(*r*^{f}) | 0.0119 | 0.00747 | 0.00747 | 0.00723 | 0.00801

“A” denotes annual frequency and “Q” denotes quarterly frequency. Quarterly samples begin with the first quarter of the stated year and end with the final quarter of 2008. “*c*” and “*y*” denote the log of real consumption (nondurables plus services) and GDP, respectively, and are obtained from NIPA Tables 1.1.4–1.1.6, with annual values scaled to quarterly for comparison. “*r*^{f}” denotes the net return on the 90-day T-bill, obtained from CRSP (monthly frequency for all horizons), converted to real by subtracting the 12-month lagged moving average of CPI return (as a forecast of expected inflation). The risk-free rate is annualized by a simple scale factor.

As a result of the variance in sample moments across sub-periods, we observe a wide range of calibrated values for the TFP volatility, *σ*_{z}, and the discount factor, *β*, in the literature. Models calibrated to post-war, quarterly data require lower values of *σ*_{z} than models which calibrate to pre-war, annual data. Hence, we allow *σ*_{z} to range from 0.01 to 0.04, spanning both types of calibration.

For the remaining parameters, the subjective discount factor, *β*, and coefficient of relative risk aversion, *γ*, we entertain *β* ∈ [0.980, 0.998] and *γ* ∈ {2, 5, 10}. We choose these values because they not only encompass accepted values in the literature, but they allow a broad enough range of parameterizations to investigate their effect on the sensitivity of the value function to *σ*_{z}. In general, we are primarily concerned with *β* > 0.99, as these higher values are requisite for matching moments of the risk-free asset.

### 4.2 Model implications

#### 4.2.1 Low volatility

Table 3 reports simulation results for both projection and perturbation methods when *σ*_{z} = {0.01, 0.02} and when *γ* = 5. We set *β* = 0.998, which simultaneously yields a risk-free rate in the neighborhood of those observed in post-war data and allows us to closely approximate the historical pre-war risk-free rate of 0.00847 (see Table 4) when *σ*_{z} > 0.02. We use fifth order Chebyshev polynomials for projection and third order Taylor expansions in the case of perturbation – for the projection method, higher order approximations make little material difference to the stated results, and for the perturbation method, numerical instabilities lead to potentially greater discrepancies than those reported^{[2]}. Finally, moments are computed by simulating 100,000 quarterly observations and then aggregating financial variables to an annual frequency by a simple scale factor.
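The "simple scale factor" aggregation can be made explicit: under an iid approximation, a quarterly mean return scales by 4 and a quarterly standard deviation by √4 = 2. The sketch below illustrates this on a hypothetical simulated series standing in for the model's quarterly risk-free rate.

```python
# Sketch of scale-factor annualization (iid approximation):
# annual mean = 4 * quarterly mean; annual std = sqrt(4) * quarterly std.
# The simulated series is a hypothetical stand-in, not model output.
import random
import statistics

rng = random.Random(42)
r_quarterly = [0.004 + 0.001 * rng.gauss(0.0, 1.0) for _ in range(100_000)]

mean_annual = 4.0 * statistics.fmean(r_quarterly)
std_annual = 2.0 * statistics.stdev(r_quarterly)

# A quarterly mean near 0.004 annualizes to roughly 1.6 percent.
assert abs(mean_annual - 0.016) < 0.001
assert abs(std_annual - 0.002) < 0.0005
```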

 | σ_{z} = 0.01 | | | σ_{z} = 0.02 | |
---|---|---|---|---|---|---
 | Proj | Pert | NPert | Proj | Pert | NPert
Std(Δc) | 0.00353 | 0.00352 | 0.00352 | 0.00704 | 0.00702 | 0.00702
Std(Δy) | 0.00643 | 0.00643 | 0.00643 | 0.0129 | 0.0129 | 0.0129
Std(Δc)/Std(Δy) | 0.549 | 0.549 | 0.549 | 0.548 | 0.546 | 0.546
Std(Δi)/Std(Δy) | 1.85 | 1.85 | 1.85 | 1.84 | 1.84 | 1.84
Mean(R^{f}) | 0.0182 | 0.0181 | 0.0190 | 0.0163 | 0.0161 | NaN
Std(R^{f}) | 0.00116 | 0.00115 | 0.00114 | 0.00232 | 0.00229 | NaN
E[R^{E} − R^{f}] | 0.0000821 | 0.000213 | −0.000658 | 0.000653 | 0.000845 | NaN
Std(R^{E} − R^{f}) | 0.00221 | 0.00221 | 0.00221 | 0.00440 | 0.00440 | NaN
SR(R^{E}) | 0.0371 | 0.0964 | −0.297 | 0.148 | 0.192 | NaN
Welfare cost | 3.01 | 3.00 | 2.94 | 2.31 | 2.12 | NaN

Simulations are quarterly and financial moments are annualized.

We begin by considering the projection results in Table 3. Setting *σ*_{z} = 0.01 results in an output volatility slightly lower than observed in quarterly data, and the adjustment cost parameter *ξ* allows us to fix the ratio Std(Δ*c*)/Std(Δ*y*). Hence, it is not surprising that the standard deviations of consumption and output are not drastically different than their counterparts in the data. The remaining moments are freely determined, and in some cases are quite different from observed values. In particular, the equity premium and its volatility are extremely low and the volatility of the risk-free rate is roughly one-sixth of what would be expected in the data. In fact, Kaltenbrunner and Lochstoer (2008) find that, holding the Sharpe ratio of equity fixed, there is a trade-off in matching the mean and variance of the equity asset and the volatility of the risk-free rate. We emphasize that a simple modification to the model à la Croce (2013) (explicitly parameterizing a time varying growth rate in the TFP process) can rectify some of these issues. However, in order to highlight our computational results, we favor parsimony and forsake the additional state variable.

The remaining columns of Table 3 report simulation results for perturbation, both for the case where asset prices are approximated directly (“Pert”) and for the case where they are computed as nonlinear functions of the approximated value and policy functions (“NPert”). For both values of *σ*_{z}, we see that a third order perturbation yields quantity dynamics that are almost identical to those of projection. The same is not true of asset pricing moments and welfare costs. When *σ*_{z} = 0.01, both variants of the perturbation method generate simulated moments that are in close agreement with projection, the one exception being the equity premium, which is extremely close to zero in all cases. However, increasing the TFP volatility to *σ*_{z} = 0.02 renders the nonlinear perturbation unable to compute asset prices and welfare costs. In theory, if the approximations of the value and policy functions were good enough, any nonlinear function of them would also yield a highly accurate approximation. However, as we will see below, perturbation delivers a poor solution of the value function when *σ*_{z} is high. The result is that a direct local approximation of asset prices is more accurate than a nonlinear function of the approximated value and policy functions.

#### 4.2.2 High volatility

Asset pricing papers that match moments of annual, pre-war data sets are too numerous to cite. Table 4 reports simulation results for both projection and perturbation methods when *σ*_{z} = {0.03, 0.04}, the latter being a value that conforms to pre-war, annual parameterizations. As before, *γ* = 5 and *β* = 0.998.

 | σ_{z} = 0.03 | | | σ_{z} = 0.04 | |
---|---|---|---|---|---|---
 | Proj | Pert | NPert | Proj | Pert | NPert
Std(Δc) | 0.0105 | 0.0105 | 0.0105 | 0.0140 | 0.0138 | 0.0138
Std(Δy) | 0.0193 | 0.0193 | 0.0193 | 0.0257 | 0.0257 | 0.0257
Std(Δc)/Std(Δy) | 0.547 | 0.543 | 0.543 | 0.543 | 0.537 | 0.537
Std(Δi)/Std(Δy) | 1.82 | 1.83 | 1.83 | 1.80 | 1.81 | 1.81
Mean(R^{f}) | 0.0130 | 0.0127 | NaN | 0.00847 | 0.00779 | NaN
Std(R^{f}) | 0.00345 | 0.00343 | NaN | 0.00455 | 0.00468 | NaN
E[R^{E} − R^{f}] | 0.00166 | 0.00195 | NaN | 0.00299 | 0.00370 | NaN
Std(R^{E} − R^{f}) | 0.00651 | 0.00654 | NaN | 0.00855 | 0.00860 | NaN
SR(R^{E}) | 0.254 | 0.299 | NaN | 0.350 | 0.430 | NaN
Welfare cost | 1.44 | 0.663 | NaN | 0.561 | −1.38 | NaN

Simulations are quarterly and financial moments are annualized.

The previous discrepancies now become exaggerated: while the global method and both variations of the local method show high agreement for quantity dynamics, the solutions for asset moments and welfare costs diverge as *σ*_{z} increases. As before, nonlinear perturbation is unable to compute asset prices and welfare costs at these values of *σ*_{z}, but the direct local approximations ameliorate the problem. However, in the most extreme case of *σ*_{z} = 0.04, even the direct perturbation and projection show moderate discrepancies for asset prices, and for both values of *σ*_{z} the welfare cost estimates differ substantially.^{[3]}

The inability of the local method to approximate the value function has material consequences for welfare calculations. Under projection, increasing TFP volatility from *σ*_{z} = 0.03 to *σ*_{z} = 0.04 results in a welfare loss of 1.44 − 0.561 = 0.879, while the analogous computation under perturbation is 0.663 + 1.38 = 2.04 – more than twice the value of the global method.
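The arithmetic behind this comparison can be checked directly from the Table 4 entries (last row, "Proj" and "Pert" columns):

```python
# Arithmetic check of the welfare-cost comparison using Table 4:
# the welfare-cost row under "Proj" moves from 1.44 to 0.561, and under
# "Pert" from 0.663 to -1.38, as sigma_z rises from 0.03 to 0.04.
proj_change = 1.44 - 0.561        # global (projection) method
pert_change = 0.663 - (-1.38)     # local (perturbation) method

assert abs(proj_change - 0.879) < 1e-12
assert abs(pert_change - 2.043) < 1e-12
# The local method implies a change more than twice the global one.
assert pert_change > 2 * proj_change
```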

We will see that the findings in this section are a result of the fact that perturbation is a local approximation around the deterministic steady state (*σ*_{z} = 0), and that the value function exhibits a high degree of curvature in the direction of *σ*_{z}. For this reason, perturbation has difficulty achieving an accurate approximation as the calibrated value of *σ*_{z} moves away from zero. Further, although we do not consider values of *σ*_{z} > 0.04, we note that projection methods will never result in negative approximations (and hence, explosive paths) of the value function, as perturbation does, for any value of *σ*_{z}, since they control the approximation error over the entire domain rather than relying on a radius of convergence. This point is emphasized by Den Haan and de Wind (2009).

### 4.3 Graphical evidence

We now provide graphical evidence to support the model solutions of Section 4.2. Figure 1 shows policy function approximations for 5th order Chebyshev projection and perturbation of orders 1, 2 and 3, all for the case of *σ*_{z} = 0.04 and *β* = 0.998 (for the value and policy functions, the “Pert” and “NPert” solutions are identical). Figure 2 and Figure 3 depict similar approximations for the value and bond price functions.

These plots clarify the results reported in Table 4: both projection and perturbation produce policy functions that are in close agreement, while there is a wide discrepancy in their solutions for the value and bond price functions, even among the different order perturbations. Since the quantity dynamics of the solutions are not sensitive to the value function, it is not surprising that the simulated macroeconomic moments of the two methods do not differ by a great amount. However, the risk-free rate and welfare costs depend directly on the value function, and their simulated moments inherit its approximation error. Figure 4–Figure 6 show the analogous approximations for *σ*_{z} = 0.01, and demonstrate that when the TFP volatility is low, the solution methods are more similar, as expected from the simulation output in Table 3. In this case, the consumption policy approximations overlap to an even greater extent, the value function approximations are separated by only a (relatively) small level shift and the bond price functions are shifted closer together and also overlap more. The discrepancies in the value and bond price functions further diminish as we shrink *σ*_{z} toward zero.

We note that in our model it is possible to analytically solve for the return on equity in terms of aggregate variables (see Appendix C), and hence it is unaffected by poor local approximations of the value function. It follows that local approximations of the equity premium are only affected by the value function via the risk-free rate. This result may not extrapolate to more general models where analytical expressions of the return on equity are not available.

To understand why the value functions for the two solution methods diverge for large *σ*_{z}, it is useful to think of the value function and consumption policy as functions of both the state variable *and* the TFP volatility parameter, *σ*_{z}. Figure 7–Figure 9 show policy, value and bond price function approximations for 5th order Chebyshev projection and 3rd order perturbation, when *σ*_{z} ∈ [0, 0.04]. Thus, the approximations in Figure 1–Figure 6 are simply cross sections (fixing *σ*_{z}) of the functions depicted in Figure 7–Figure 9; e.g. the value function in panel (a) of Figure 2 corresponds to the cross-section of Figure 8 at *σ*_{z} = 0.04. It becomes apparent from inspecting these surfaces that the value function exhibits a high degree of curvature in the direction of *σ*_{z}, with the amount of curvature increasing as *σ*_{z} approaches zero, whereas the consumption policy is quite flat. As a result, similar to the square root function considered above, we anticipate that the radius of convergence of a Taylor polynomial approximation of the value function will diminish for approximations centered at points very close to *σ*_{z} = 0. This is exactly what a perturbation solution is: a local Taylor approximation at *σ*_{z} = 0. It follows that, for the particular case of the value function, we have no guarantee of convergence for values of *σ*_{z} far away from zero, and in fact we should not be surprised to see divergent behavior, as suggested by the previous plots. This result is congruent with Den Haan and de Wind (2009), who find that nonlinearities in DSGE models can render high-order perturbation solutions explosive. On the other hand, the radius of convergence for the consumption policy is likely to be quite large, and we expect local Taylor approximations to converge for a wide range of *σ*_{z} – this is corroborated by Figure 7, where we are unable to distinguish the two surfaces.
We are careful to note, however, that due to the nonlinearity and numerical difficulty of the problem, it is not possible to derive the true radius of convergence, and hence we cannot be certain that it is the cause of the large numerical discrepancies in the local approximation; rather, we simply use the nonlinearities of the approximated surfaces as a guide, suggesting the radius of convergence might be responsible.

The upshot of the foregoing results is that a global projection method is more robust to value function curvature, since it minimizes an error equation expressed in terms of the true (unknown) value function over the whole domain, rather than approximating the truth via an expansion centered at a distant point.
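The contrast can be made concrete with a small numerical sketch. The square root function below is only an illustrative stand-in for the value function's curvature near *σ*_{z} = 0 (it is not the paper's model), and the domain and expansion point are chosen for illustration:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Stand-in for the value function's curvature near zero (illustrative only).
f = np.sqrt
grid = np.linspace(0.05, 2.0, 400)

# Global approximation: a degree-5 Chebyshev fit over the whole domain.
cheb = Chebyshev.fit(grid, f(grid), deg=5)

# Local approximation: a degree-5 Taylor polynomial centered at x0 = 0.1,
# mimicking a perturbation taken close to sigma_z = 0.
x0 = 0.1

def taylor(x, order=5):
    # Taylor coefficients of sqrt: generalized binomials C(1/2, n) * x0**(1/2 - n).
    total = np.zeros_like(np.asarray(x, dtype=float))
    binom = 1.0
    for n in range(order + 1):
        total = total + binom * x0 ** (0.5 - n) * (x - x0) ** n
        binom *= (0.5 - n) / (n + 1)
    return total

err_cheb = float(np.max(np.abs(cheb(grid) - f(grid))))
err_taylor = float(np.max(np.abs(taylor(grid) - f(grid))))
# err_cheb stays uniformly small; err_taylor explodes far from x0.
```

The global fit of the same polynomial degree remains uniformly accurate, while the local expansion diverges outside its radius of convergence.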

### 4.4 Solution evaluation and sensitivity analysis

In the preceding analysis we have merely shown some conditions under which the two solution methods we consider are different; we have not formally investigated their relative accuracy. We now undertake the important task of determining which of the solutions is a closer approximation to the unknown truth and do so for a variety of model parameter values. While our primary evaluation criterion will be Euler equation errors, we will conclude the section with a discussion of the Den-Haan-Marcet statistic (Den Haan and Marcet, 1994).

#### 4.4.1 Pricing errors

The fundamental asset pricing equation is *E*_{t}[*M*_{t+1}*R*_{t+1}] = 1, where *M*_{t+1} is the time-*t* stochastic discount factor and *R*_{t+1} is the return on any asset between *t* and *t* + 1. Thus, from Equation (18) we obtain the pricing errors in Equation (33).

Since the stochastic discount factor *M*_{t+1} incorporates current consumption, *C*_{t}, in its denominator [see Equation (14)], we can interpret pricing errors as a fraction of contemporaneous consumption. As suggested by Judd and Guu (1997), Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006), and Caldara et al. (2012), base-10 logarithms of the pricing errors in Equation (33) can be interpreted in the following manner: a value of −1 corresponds to a 10% consumption error, a value of −2 to a 1% consumption error, a value of −3 to a 0.1% consumption error, and so on. Combining Equation (33) with the long simulations of *ε*_{t} [see Equation (20)] used in Section 4.2, we compute the mean of the pricing errors implied by the model, for each solution method. The expectation is approximated by a Gauss-Hermite quadrature rule, with the order chosen so as to compute the integral exactly for the finite polynomial solutions.

The pricing errors are reported graphically in the upper rows of Figure 10–Figure 13. The individual plots depict how mean Euler equation errors vary with *β* ∈ [0.980, 0.998] – in general, perturbation solution quality degrades as *β* rises. Moving across the upper rows, from left to right, we observe the effect of increasing risk aversion, *γ*, and moving between the four figures we observe the effect of increasing TFP volatility, *σ*_{z} – as with *β*, the quality of the perturbation solution degrades as each of these parameters increases. At the lower extreme, in Figure 10, when *σ*_{z} = 0.01 and *γ* = 2, a 3rd order perturbation dominates a 5th order projection for virtually all values of *β* that we consider. However, both solutions produce errors that most would consider economically insignificant (less than 0.01% of consumption). Holding *σ*_{z} fixed and increasing *γ*, the perturbation errors rise to levels as high as 1% of consumption for high values of *β*. These qualitative results become more pronounced in Figure 11–Figure 13, where at the upper extreme (*σ*_{z} = 0.04 and *γ* = 10), perturbation errors exceed 10% of consumption for high values of *β*. It is this final case that deserves particular attention: models that calibrate to annual, pre-war data generally require high values of *σ*_{z} (on the order of 0.04) and *β* (on the order of 0.998 or above) in order to match output volatility and the level of the risk-free rate. We see that these calibrations, matched with moderate levels of risk aversion (above 5), can lead to poor local approximations. On the other hand, models that calibrate to quarterly, post-war data typically obtain much smaller values of *σ*_{z} (on the order of 0.01 or below), and do not suffer from poor local approximations.
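The quadrature and the base-10 error convention can be illustrated in isolation. The sketch below prices a one-period bond under CRRA utility with i.i.d. lognormal consumption growth – a deliberately simple stand-in for the model's Equation (33), chosen because the quadrature answer can be checked against an exact lognormal moment; all parameter values are assumed:

```python
import numpy as np

# Assumed illustrative parameters (not the paper's calibration).
beta, gamma = 0.99, 5.0          # discount factor, risk aversion
mu_g, sigma_g = 0.005, 0.02      # mean and s.d. of log consumption growth

# Gauss-Hermite rule: E[f(eps)] ~ (1/sqrt(pi)) * sum_j w_j f(sqrt(2) x_j)
# for eps ~ N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(10)
g = np.exp(mu_g + sigma_g * np.sqrt(2.0) * nodes)
q_quad = np.dot(weights, beta * g ** (-gamma)) / np.sqrt(np.pi)

# Exact bond price from the lognormal moment-generating function.
q_exact = beta * np.exp(-gamma * mu_g + 0.5 * (gamma * sigma_g) ** 2)

# Base-10 log of the unit-free error, as in the text: -2 means an error
# of 1% of consumption, -3 means 0.1%, and so on.
log10_error = float(np.log10(abs(q_quad / q_exact - 1.0) + 1e-300))
```

With ten nodes the quadrature error is already at the level of machine precision, which is why a modest-order rule suffices for the polynomial solutions in the text.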

The fundamental characteristic driving these results is the curvature of the value function with respect to TFP volatility: as shown in Figure 8, the value function can exhibit a high degree of curvature in the direction of *σ*_{z}. In cases where the curvature is extreme, a local method such as perturbation will have difficulty approximating the function at points far from the deterministic steady state (the locus of approximation), a result which is corroborated by Figure 10–Figure 13. This is especially relevant for models that require a high calibrated value of *σ*_{z}. The parameters *γ* and *β* have an effect insofar as they increase the sensitivity of the value function to changes in *σ*_{z}; i.e. the sensitivity increases with each of these parameters.

The second and third rows of Figure 10–Figure 13 depict the mean of the risk-free rate, computed nonlinearly and via a direct perturbation expansion, respectively. The nonlinear perturbation computation diverges from the projection solution as *β*, *γ* and *σ*_{z} rise. This discrepancy is most pronounced for *σ*_{z} = 0.04, *γ* ≥ 5 and *β* ≥ 0.99. On the other hand, the direct perturbation computation appears to be quite similar to projection for all parameter values. In truth, the local method exhibits a small amount of divergent behavior as well, but the graphical evidence is washed out by the scale of the nonlinear deviation. The moments in Table 3 and Table 4 give an idea of the magnitude of divergence.

The reason for the discrepancy in the risk-free rate computations is the same as for the Euler equation errors: for high values of *β*, *γ* and *σ*_{z}, perturbation provides a poor approximation to the value function. Since the risk-free rate depends directly on the value function in models with recursive utility [see Equations (27) and (14)], it is likewise poorly approximated by perturbation, insofar as the value function approximation is poor. This effect is most severe when we compute the risk-free rate nonlinearly. Viewed from the opposite perspective, if our approximations of the value function and consumption policy were highly accurate, a nonlinear computation of the risk-free rate would be accurate as well, even for high *β*, *γ* and *σ*_{z}. The interesting aspect of our results, though, is that we can ameliorate the effect of the value function by directly computing the risk-free rate via a Taylor expansion. This latter method anchors the risk-free rate at its deterministic steady state and weakens its computational relationship with the value function.
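The channel from value function to bond prices can be seen in the textbook Epstein-Zin stochastic discount factor, which we restate here in its standard form (the paper's Equation (14) may differ in normalization):

```latex
M_{t+1} \;=\; \beta \left(\frac{C_{t+1}}{C_t}\right)^{-1/\psi}
\left(\frac{V_{t+1}}{E_t\!\left[V_{t+1}^{1-\gamma}\right]^{1/(1-\gamma)}}\right)^{1/\psi-\gamma},
\qquad
R^f_t \;=\; \frac{1}{E_t\left[M_{t+1}\right]}.
```

When *γ* = 1/*ψ* the exponent 1/*ψ* − *γ* vanishes, the value-function term drops out, and the SDF reduces to the CRRA case – which is why an inaccurate value function contaminates the risk-free rate only under recursive utility.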

The final rows of Figure 10–Figure 13 lend more insight to the foregoing results. As mentioned in Section 4.2, we include the log of the welfare costs in these plots. The welfare cost approximations diverge as *β*, *γ* and *σ*_{z} increase, which we attribute to the poor local approximation of the value function. This concern is less acute for welfare costs that are not attributed to *σ*_{z} (i.e. welfare effects evaluated for a variable other than *σ*_{z}). However, since the approximations of the value function are likely to be very different for different values of *σ*_{z} (it will be well approximated for low volatility and badly approximated for high volatility), we expect that welfare computations attributed to TFP volatility will be very poorly approximated.

As a final note, we repeated all of the previous analysis for *μ* = 0 and *ψ* = 0.5; these changes only caused slight shifts, leaving the results above qualitatively the same, so for space considerations we do not include them. We recognize, however, the particular interplay between *β* and *μ*: lowering the growth rate increases the model's ability to tolerate high values of *β* before local approximations become very poor. That is, we can think of the model as depending on a single growth-adjusted subjective discount factor, *β*^{*}; lowering *μ* or *β* effectively decreases *β*^{*}, and hence the model's sensitivity to *σ*_{z}.

#### 4.4.2 Den-Haan-Marcet statistic

Since agents in our model have rational expectations, the residual of the pricing equation,

*u*_{t+1} = *M*_{t+1}*R*_{t+1} − 1,

should not be in the time *t* information set. That is, under the null hypothesis that we have correct solutions for the value and policy functions, *β* = 0 in regressions of the form

*u*_{t+1} = *β*_{1}*x*_{1,t} + *β*_{2}*x*_{2,t} + ⋯ + *β*_{n}*x*_{n,t} + *e*_{t+1},

for *t* = 1, 2, …, *T*, where the *x*_{i,t} represent variables in the time *t* information set. Den Haan and Marcet (1994) suggest testing this hypothesis by constructing a Wald-type statistic

*DM*(*n*) = (∑_{t}*u*_{t+1}*x*_{t})′(∑_{t}*u*_{t+1}^{2}*x*_{t}*x*_{t}′)^{−1}(∑_{t}*u*_{t+1}*x*_{t}),  (36)

where *x*_{t} is the vector of time *t* regressors, *x*_{1,t}, *x*_{2,t}, …, *x*_{n,t}, and *X* is the matrix with rows *x*_{t}′. Under the null hypothesis, *DM*(*n*) is asymptotically distributed *χ*^{2}(*n*). Since numerical approximation errors are never exactly zero, a large enough sample size *T* will force a rejection of the test. To account for this, Den Haan and Marcet (1994) compute *DM*(*n*) for multiple simulations of *u* and determine the proportion of times that the statistic falls within certain critical limits of the *χ*^{2}(*n*) distribution. If the approximate solutions are good, the proportions within these bounds should be close to the actual area under the *χ*^{2}(*n*) density function.

In our implementation of the Den Haan-Marcet statistic, we regress the price residuals on a constant and five lags of both log consumption growth and log productivity growth (hence, *n* = 11). We fix *β* = 0.998, simulate 500 data sets of *T* = 3000 quarterly observations (750 years of data), and report in Table 5 the proportion of times that the value from Equation (36) falls below the 5% point or above the 95% point of the *χ*^{2}(11) density. This Wald-type diagnostic corroborates the main result of the paper: in all cases the global Chebyshev projection method provides a very accurate solution to the model, whereas a high-order perturbation is only adequate for small values of the TFP volatility or where other model parameters (such as *γ*) eliminate the sensitivity of the value function to *σ*_{z}.

| | *σ*_{z} = 0.01 | | | *σ*_{z} = 0.04 | | |
|---|---|---|---|---|---|---|
| | *γ* = 2 | *γ* = 5 | *γ* = 10 | *γ* = 2 | *γ* = 5 | *γ* = 10 |
| Projection | (0.052, 0.054) | (0.052, 0.052) | (0.050, 0.052) | (0.052, 0.056) | (0.054, 0.062) | (0.050, 0.07) |
| Perturbation | (0.058, 0.052) | (0.050, 0.058) | (0.006, 0.338) | (0, 0) | (0, 0) | (0, 0) |

Table 5: The numbers in parentheses represent the proportion of times the statistic was below the 5% point and above the 95% point, respectively, of the *χ*^{2}(11) density. In all cases, *β* = 0.998.
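A minimal implementation of the statistic is sketched below. It uses the standard sandwich form of the Den Haan-Marcet diagnostic; the paper's exact normalization may differ, and the simulated residuals here are synthetic draws (independent of the regressors, so the null holds by construction) rather than model output:

```python
import numpy as np

def dm_statistic(u, X):
    """Den Haan-Marcet style Wald statistic for residuals u_{t+1} on
    time-t regressors X (T x n). Under the null of an exact solution,
    DM(n) is asymptotically chi^2(n)."""
    T, n = X.shape
    m = X.T @ u / T                       # (1/T) sum_t u_{t+1} x_t
    A = (X * u[:, None] ** 2).T @ X / T   # (1/T) sum_t u^2 x_t x_t'
    return float(T * m @ np.linalg.solve(A, m))

rng = np.random.default_rng(0)
T, n = 3000, 11
# A constant plus ten synthetic instruments, as in the text's n = 11.
X = np.column_stack([np.ones(T), rng.standard_normal((T, n - 1))])
u = rng.standard_normal(T)   # residuals independent of X: null holds
dm = dm_statistic(u, X)      # a single draw that should look chi^2(11)
```

Repeating the last two lines over many simulated data sets and tabulating how often `dm` falls outside the 5% and 95% points of the *χ*^{2}(11) density reproduces the exercise behind Table 5.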

## 5 Conclusion

We have shown that the choice of solution method can be critical for production-based asset pricing models with recursive utility. In particular, local perturbation methods are inadequate for such models concerned with asset prices and welfare costs when TFP volatility is calibrated at high levels and when the risk aversion and the discount factor parameters are sufficiently high to make the value function very sensitive to TFP volatility. A global projection method, on the other hand, does quite well under a variety of circumstances. The reason for this result is that the value function in our model is highly curved in the direction of TFP volatility, *σ*_{z}. In fact, the degree of curvature is high enough that a local Taylor approximation of the value function is only suitable over a very small region around the point of approximation. Since perturbation is equivalent to a local Taylor expansion around the deterministic steady state (*σ*_{z} = 0), we find that in certain cases the resulting solution diverges for large *σ*_{z}, even for high-order approximations. A global approximation method, however, such as Chebyshev projection, is not susceptible to these issues, since it seeks to minimize an error function at any desired level of the TFP volatility.

We show that the parameter choice and model calibration is pivotal to the results above. For models that calibrate to quarterly, post-war data, typical values of *σ*_{z} and *β* are low enough to eliminate value function sensitivity to TFP volatility, rendering perturbation solutions perfectly acceptable. Caldara et al. (2012) compare perturbation and projection methods for such parameterizations and demonstrate that both are adequate. Our results diverge from those of Caldara et al. (2012) when we consider parameter values that are relevant for models which calibrate to annual, pre-war data. For these latter parameterizations, the quality of high-order perturbation methods degrades.

Local approximations of asset prices can be improved by augmenting the system of perturbation conditions and directly computing expansions for the risk-free rate. This method of approximation weakens the dependency of bond prices on the poorly approximated value function, and results in a more accurate solution. The same is less true of welfare costs: while very small improvements are also observed from direct computations, they are not nearly as striking.

Although we have not reported the stochastic steady state distribution of capital due to space considerations, as with many macroeconomic models, we find that the distribution is quite far from the deterministic steady-state value. While the value function is highly curved in the direction of *σ*_{z}, *it is relatively linear* in the direction of capital. Unfortunately, perturbation approximations centered at nonzero values of *σ*_{z} are not possible, as analytic expressions for the derivatives of the perturbation system can only be found at the deterministic steady state (*σ*_{z} = 0).

In contrast to the value function, the consumption policy is relatively linear in the direction of *σ*_{z} and more curved in the direction of capital. As a result, a local Taylor approximation converges over a large range of values of *σ*_{z}, and perturbation has little difficulty in providing good approximations of the endogenous choice variable. For this reason, our results would have little bearing on the practitioner who is only interested in macroeconomic quantities: since quantity dynamics are not sensitive to the value function, perturbation delivers adequate solutions, even for fairly large values of *σ*_{z}. Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) only consider quantity dynamics and find that perturbation is competitive with global solution methods.

Our results are important for individuals who are jointly interested in quantity dynamics and other variables such as asset prices and welfare costs. Since asset prices in a recursive utility model depend crucially on the value function, our choice of solution method has an important impact on their moments (risk-free rates, risk premia, their volatilities, etc.) insofar as the method improves the value function approximation. The same is true of other variables that are tightly linked to the value function, such as welfare costs. While we don’t emphasize our particular model as a solution to the joint problem of matching macroeconomic and asset pricing data, we feel that extensions of the model have great potential, and that the problems we have uncovered are likely to be present in other production-based models with recursive utility.

Our general caution is for practitioners to be aware of the potential disadvantages of a local approximation method and, when feasible, to compare it to a global method to ensure adequacy. While we find that Chebyshev projection is competitive with perturbation in terms of computing time for the case of a single state variable, such is not likely to be true of models with many more state variables; as the number of variables increases, a global method will suffer from the curse of dimensionality. In cases such as these (see, for example, Rudebusch and Swanson (2012)) perturbation has the benefit of computational simplicity and, hence, is a natural candidate for an estimation procedure. However, for models where perturbation cannot adequately approximate the value function, and where financial moments or welfare costs are of interest, no degree of computational simplicity can compensate for an incorrect solution. For this reason, we suggest using perturbation in cases where solution adequacy can be verified against a more robust benchmark.

## A Projection algorithm pseudocode

Our projection algorithm proceeds in the following manner:

1: Set *τ* = 10^{−8}, Δ = 1, *l* = 0 and *a*^{0} = 1.

2: **while** Δ > *τ* **do**

3: **for** *i* = 1 to *N*_{k} **do**

4: Solve the maximization step [equation omitted; see Appendix D]

5: **for** *j* = 1 to *N*_{ε} **do**

6: Compute the quadrature terms [equations omitted]

7: **end for**

8: Compute the updated values [equations omitted], where *ω*_{j} denote the quadrature weights

9: **end for**

10: Update the coefficients by solving the linear system [equation omitted]

11: Set *l* = *l* + 1.

12: **end while**

13: Solve for the coefficients of the consumption policy approximant [equation omitted]

The maximization step in line 4 of the algorithm can be performed in a variety of ways; we use a binary search method that exploits the monotonicity of the value function with respect to consumption (see Appendix D). We economize on computation by performing the full maximization only when *l* − 100⌊*l*/100⌋ = 0 (that is, when *l* modulo 100 is zero), and otherwise updating the value and consumption directly [equations omitted].

Note that the polynomial approximation for consumption, computed in line 13, is a byproduct of the solution and is not used within the algorithm to obtain the solution. For our particular implementation of the projection algorithm, we use Chebyshev basis functions and their collocation points; this method allows us to choose the *N*_{k} values of the capital grid as the roots of the order-*M* Chebyshev polynomial, with *N*_{k} = *M*. The latter property results in a square polynomial matrix, and we find that *M* = 6 (an order 5 polynomial) provides accurate solutions to the problem.
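The square collocation system can be sketched as follows. The capital bounds and the stand-in function are purely illustrative (the algorithm would fit value-function iterates at this step instead):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

M = 6                       # number of basis functions: an order-5 polynomial
k_lo, k_hi = 0.5, 1.5       # capital grid bounds (assumed, for illustration)

# Collocation nodes: roots of the degree-M Chebyshev polynomial in (-1, 1),
# mapped into the capital interval.
z = np.cos((2.0 * np.arange(1, M + 1) - 1.0) * np.pi / (2.0 * M))
k = k_lo + (z + 1.0) * (k_hi - k_lo) / 2.0

# Square basis matrix T_j(z_i), j = 0..M-1 -- the "square polynomial
# matrix" referred to in the text.
Phi = chebvander(z, M - 1)

# Fit a stand-in function at the nodes (illustrative only).
v = np.log(k)
a = np.linalg.solve(Phi, v)   # exact interpolation at the collocation points

max_resid = float(np.max(np.abs(Phi @ a - v)))
```

Because the number of nodes equals the number of basis functions, the linear system in line 10 of the algorithm has a unique solution and the approximant interpolates exactly at the grid points.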

## B Adjustment cost parameters

We define the parameters *α*_{1} and *α*_{2} of the adjustment cost function. We want to specify a function that does not impose adjustment costs in the deterministic steady state; i.e. a function that satisfies the level and slope conditions of Equations (39) and (40) [equations omitted], where [definitions omitted]. Since the slope condition does not involve *α*_{1}, Equation (39) is then satisfied if [equation omitted]. Combining Equations (39) and (41), it is clear that [equation omitted], from which we conclude the expressions for *α*_{1} and *α*_{2} [equations omitted].
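Although the paper's displayed equations are not reproduced above, the construction can be sketched under the adjustment cost form of Jermann (1998), Φ(*x*) = *α*_{1} + *α*_{2}/(1 − 1/*ξ*) *x*^{1−1/*ξ*} with *x* = *I*/*K* – an assumed functional form, with illustrative parameter values:

```python
# Assumed Jermann (1998) form: Phi(x) = a1 + a2/(1 - 1/xi) * x**(1 - 1/xi),
# where x = I/K.  Values below are illustrative, not the paper's calibration.
xi = 4.0       # elasticity parameter of the adjustment cost function
x_ss = 0.025   # steady-state investment-capital ratio

# No adjustment costs at the steady state: Phi(x_ss) = x_ss, Phi'(x_ss) = 1.
a2 = x_ss ** (1.0 / xi)                                 # from Phi'(x_ss) = 1
a1 = x_ss - a2 * x_ss ** (1.0 - 1.0 / xi) / (1.0 - 1.0 / xi)  # from Phi(x_ss) = x_ss

def phi(x):
    return a1 + a2 / (1.0 - 1.0 / xi) * x ** (1.0 - 1.0 / xi)

def dphi(x):
    return a2 * x ** (-1.0 / xi)

level_gap = abs(phi(x_ss) - x_ss)   # should be zero
slope_gap = abs(dphi(x_ss) - 1.0)   # should be zero
```

Both no-adjustment-cost conditions hold exactly at the steady state by construction, which is the role the two parameters play in the text.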

## C Derivation of return on equity

The Lagrangian for the firm's problem is [equation omitted]. The first order condition with respect to *I*_{t} is [equation omitted], which implies [equation omitted]. The first order condition with respect to *K*_{t+1} is [equation omitted]. Using Equation (45) we substitute for the multiplier [equation omitted]. Substituting for *μ*_{t} and *μ*_{t+1}, and recognizing the definition of the return, we obtain [equation omitted], where [equation omitted]. Equation (49) is the standard Euler condition for the return on investment [equation omitted].

## D Maximization algorithm

We present the binary search algorithm that we use to select the optimal consumption value in line 4 of the projection algorithm. This method exploits the monotonicity of the value function with respect to consumption and converges very quickly. It is performed for each value in the capital grid.

1: Set *τ*^{c} = 10^{−6}, Δ^{c} = 1, *c*^{min} = 0 and *c*^{max} = [bound omitted]

2: **while** Δ^{c} > *τ*^{c} **do**

3: Compute the candidate points *c*^{1} and *c*^{2} [equation omitted]

4: **for** *m* = 1 to 2 **do**

5: [equation omitted]

6: Perform the corresponding steps of the projection algorithm

7: [equation omitted]

8: **end for**

9: **if** *v*^{1} > *v*^{2} **then**

10: *c*^{max} = *c*^{1}

11: **else**

12: *c*^{min} = *c*^{2}

13: **end if**

14: Δ^{c} = *c*^{max} − *c*^{min}

15: **end while**

16: Return the optimal consumption value [equation omitted]
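The logic of the search can be sketched generically. The routine below is a ternary-style variant that compares the objective at two interior points each iteration, in the spirit of the algorithm above but not a transcription of it; the objective function is a stand-in for the value-function evaluation in lines 4–8:

```python
import numpy as np

def maximize_concave(f, c_min, c_max, tol=1e-6):
    """Search for the maximizer of a strictly concave f on [c_min, c_max]
    by comparing two interior points per iteration."""
    while c_max - c_min > tol:
        c1 = c_min + (c_max - c_min) / 3.0
        c2 = c_min + 2.0 * (c_max - c_min) / 3.0
        if f(c1) > f(c2):
            c_max = c2   # by concavity, the maximizer lies left of c2
        else:
            c_min = c1   # by concavity, the maximizer lies right of c1
    return 0.5 * (c_min + c_max)

# Stand-in objective: u(c) = log(c) - c, maximized at c = 1.
c_star = maximize_concave(lambda c: np.log(c) - c, 1e-6, 2.0)
```

Each iteration shrinks the bracketing interval by a factor of 2/3, so convergence to the stated tolerance requires only a few dozen objective evaluations per grid point.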

### D.1 Radius of convergence: motivating example

We consider the very simple example of approximating the square root function.^{[4]} Judd (1998) provides similar numerical results for *x*^{1/4}. Generally speaking, any analytic function can be written as

*f*(*x*) = ∑_{i=0}^{∞} (*f*^{(i)}(*x*_{0})/*i*!)(*x* − *x*_{0})^{i},

for *x* in a neighborhood of *x*_{0}, where *f*^{(i)} denotes the *i*th derivative of *f* and where *f*^{(0)} = *f*. Applying this result to the square root function yields coefficients involving double factorials [equation omitted], where *n*!! denotes the double factorial of *n*.^{[5]} For a general power series of the form ∑_{n}*a*_{n}(*x* − *x*_{0})^{n}, the radius of convergence is *r* = lim_{n→∞}|*a*_{n}/*a*_{n+1}|, when this limit exists. Hence, for the square root function (assuming *x*_{0} > 0), *r* = *x*_{0}.

Equation (54) states that the Taylor series expansion of the square root function around *x*_{0} is only guaranteed to converge for *x* ∈ (0, 2*x*_{0}). Outside of this range, the series expansion will diverge.

Intuitively, the radius of convergence for a Taylor series depends on the rate at which the derivatives of the target function diminish at the point of approximation. For a function with little shape, the high-order derivatives drop quickly to zero, forcing *r* to be quite high – the extreme case being a polynomial of finite order, with an infinite radius of convergence (the Taylor series converges for all *x*).^{[6]}

To illustrate this concept, we approximate the square root function at two values: *x*_{0} = 1 and *x*_{0} = 2. In the first case the radius of convergence is 1 and we anticipate that a Taylor series approximation will only be appropriate in the range (0, 2). The blue lines in Figure 14 depict the first nine Taylor polynomial approximations of √*x* around *x*_{0} = 1. Clearly, the polynomial approximations are adequate within (0, 2) and inadequate outside of it, while the approximations around *x*_{0} = 2 are adequate over the wider range (0, 4), confirming our prior intuition.

This example illustrates that there is an inverse relationship between the degree of curvature of a function at a point of interest and the size of the interval (around that point) over which a Taylor polynomial approximation is adequate.
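The divergence outside the radius of convergence can be verified directly. The sketch below evaluates truncated Taylor expansions of √*x* around *x*_{0} = 1 at a point inside the interval (0, 2) and at a point outside it; the evaluation points are chosen for illustration:

```python
import numpy as np

def sqrt_taylor(x, x0, order):
    """Taylor polynomial of sqrt(x) around x0, truncated at `order`.
    Coefficients are generalized binomials C(1/2, n) * x0**(1/2 - n)."""
    total = 0.0
    binom = 1.0
    for n in range(order + 1):
        total += binom * x0 ** (0.5 - n) * (x - x0) ** n
        binom *= (0.5 - n) / (n + 1)
    return total

x0 = 1.0
inside, outside = 1.5, 3.0   # inside and outside the interval (0, 2*x0)

err_inside = [abs(sqrt_taylor(inside, x0, n) - np.sqrt(inside)) for n in (3, 9, 15)]
err_outside = [abs(sqrt_taylor(outside, x0, n) - np.sqrt(outside)) for n in (3, 9, 15)]
# err_inside shrinks as the order rises; err_outside grows without bound.
```

Raising the order improves the approximation inside the radius of convergence but makes it strictly worse outside – precisely the behavior of high-order perturbation solutions for large *σ*_{z} documented in the text.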

## References

Aruoba, S. B., J. Fernández-Villaverde, and J. F. Rubio-Ramírez. 2006. "Comparing Solution Methods for Dynamic Equilibrium Economies." *Journal of Economic Dynamics and Control* 30: 2477–2508.

Bansal, R., and A. Yaron. 2004. "Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles." *The Journal of Finance* LIX: 1481–1509.

Bansal, R., D. Kiku, and A. Yaron. 2007. "Risks For the Long Run: Estimation and Inference." Working Paper.

Brumm, J., and S. Scheidegger. 2015. "Using Adaptive Sparse Grids to Solve High-Dimensional Dynamic Models." Working Paper.

Cai, Y., K. L. Judd, and J. Steinbuks. 2015. "A Nonlinear Certainty Equivalent Approximation Method for Dynamic Stochastic Problems." Working Paper.

Caldara, D., J. Fernández-Villaverde, J. F. Rubio-Ramírez, and W. Yao. 2012. "Computing DSGE Models with Recursive Preferences and Stochastic Volatility." *Review of Economic Dynamics* 15: 188–206.

Campanale, C., R. Castro, and G. L. Clementi. 2010. "Asset Pricing in a Production Economy with Chew-Dekel Preferences." *Review of Economic Dynamics* 13: 379–402.

Croce, M. M. 2013. "Welfare Costs in the Long Run." Working Paper.

Den Haan, W. J., and A. Marcet. 1994. "Accuracy in Simulations." *The Review of Economic Studies* 61: 3–17.

Den Haan, W. J., and J. de Wind. 2009. "How Well-Behaved are Higher-Order Perturbation Solutions?" Working Paper.

Epstein, L. G., and S. E. Zin. 1989. "Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework." *Econometrica* 57: 937–969.

Fernández-Villaverde, J., and O. Levintal. 2016. "Solution Methods for Models with Rare Disasters." Working Paper, http://www.nber.org/papers/w21997.

Fernández-Villaverde, J., G. Gordon, P. Guerrón-Quintana, and J. F. Rubio-Ramírez. 2015. "Nonlinear Adventures at the Zero Lower Bound." *Journal of Economic Dynamics and Control* 57: 182–204.

Irarrazabal, A., and J. C. Parra-Alvarez. 2015. "Time-Varying Disaster Risk Models: An Empirical Assessment of the Rietz-Barro Hypothesis." Working Paper.

Jermann, U. J. 1998. "Asset Pricing in Production Economies." *Journal of Monetary Economics* 41: 257–275.

Judd, K. L. 1998. *Numerical Methods in Economics*. Cambridge, MA: MIT Press.

Judd, K. L., and S.-M. Guu. 1997. "Asymptotic Methods for Aggregate Growth Models." *Journal of Economic Dynamics and Control* 21: 1025–1042.

Judd, K. L., L. Maliar, and S. Maliar. 2011. "Numerically Stable and Accurate Stochastic Simulation Approaches for Solving Dynamic Economic Models." *Quantitative Economics* 2: 173–210.

Judd, K. L., L. Maliar, S. Maliar, and R. Valero. 2014. "Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain." *Journal of Economic Dynamics and Control* 44: 92–123.

Kaltenbrunner, G., and L. Lochstoer. 2008. "Long-Run Risk through Consumption Smoothing." Working Paper.

Levintal, O. 2016. "Taylor Projection: A New Solution Method to Dynamic General Equilibrium Models." Working Paper.

Maliar, L., and S. Maliar. 2015. "Merging Simulation and Projection Approaches to Solve High-Dimensional Problems with an Application to a New Keynesian Model." *Quantitative Economics* 6: 1–47.

Mehra, R., and E. C. Prescott. 1985. "The Equity Premium: A Puzzle." *Journal of Monetary Economics* 15: 145–161.

Parra-Alvarez, J. C. 2017. "A Comparison of Numerical Methods for the Solution of Continuous-Time DSGE Models." *Macroeconomic Dynamics* 22: 1555–1583.

Pohl, W., K. Schmedders, and O. Wilms. 2015. "Higher-Order Effects in Asset-Pricing Models with Long-Run Risks." Working Paper.

Restoy, F., and G. M. Rockinger. 1994. "On Stock Market Returns and Returns on Investment." *The Journal of Finance* XLIX: 543–556.

Rudebusch, G. D., and E. T. Swanson. 2012. "The Bond Premium in a DSGE Model with Long-Run Real and Nominal Risks." *American Economic Journal: Macroeconomics* 4: 105–143.

Schmitt-Grohé, S., and M. Uribe. 2004. "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function." *Journal of Economic Dynamics and Control* 28: 755–775.

Weil, P. 1989. "The Equity Premium Puzzle and the Risk-Free Rate Puzzle." *Journal of Monetary Economics* 24: 401–421.

Weil, P. 1990. "Nonexpected Utility in Macroeconomics." *The Quarterly Journal of Economics* 105: 29–42.

**Published Online:** 2019-12-14

© 2019 Walter de Gruyter GmbH, Berlin/Boston