Published by De Gruyter, December 14, 2019

Computational Methods for Production-Based Asset Pricing Models with Recursive Utility

Eric Mark Aldrich and Howard Kung

Abstract

We compare local and global polynomial solution methods for DSGE models with Epstein-Zin-Weil utility. We show that model implications for macroeconomic quantities are relatively invariant to the choice of solution method, but that a global method can yield substantial improvements for asset prices and welfare costs. The divergence in solution quality is highly dependent on parameters that affect value function sensitivity to TFP volatility, as well as the magnitude of TFP volatility itself. This problem is pronounced for calibrations at the extreme of those accepted in the asset pricing literature and disappears for more traditional macroeconomic parameterizations.

1 Introduction

This paper compares solution methods for production-based asset pricing models with recursive utility. In particular, we consider a standard neoclassical growth model with Epstein-Zin-Weil utility. Bansal and Yaron (2004) demonstrate the importance of recursive utility for explaining general equilibrium asset prices in an endowment economy with an exogenously specified aggregate consumption process. Subsequent extensions to production economies, e.g. Croce (2013), Campanale, Castro, and Clementi (2010), Kaltenbrunner and Lochstoer (2008), and Rudebusch and Swanson (2012), show similar promise in reconciling financial and quantity dynamics and are becoming increasingly common in the literature. Accordingly, it is important to understand the behavior of solution methods for these models, and our work contributes to the literature along that dimension.

We focus on two standard methods of solution for dynamic stochastic general equilibrium (DSGE) models: a global projection method using Chebyshev basis functions and a local perturbation method of various orders. Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) compare these solution methods for a neoclassical growth model with constant relative risk aversion utility, and demonstrate that both methods are broadly suitable. Our aim is to investigate the appropriateness of these solution methods when utility takes a more general, recursive form. Notably, we find that the two solution methods produce roughly equivalent implications for macroeconomic quantities, but can be very different with respect to asset prices and welfare costs. This discrepancy is exacerbated as the volatility of total factor productivity (TFP) increases.

Several considerations drive our results. First, the value function corresponding to Epstein-Zin-Weil utility exhibits a high degree of curvature with respect to the TFP volatility; second, perturbation is a local Taylor approximation around the deterministic steady state, where the TFP volatility is zero; and third, in the presence of recursive utility, asset prices and welfare costs, not quantities, depend critically on the shape of the value function. The upshot is that for models with high output volatility, perturbation attempts to approximate a rapidly changing function at a locus (zero TFP volatility) that is far from the point of interest (high TFP volatility). Taylor's Theorem shows that the resulting approximation error can be arbitrarily large and can even increase with the order of approximation. Thus, when the TFP is calibrated to have high volatility, the resulting approximation of the value function obtained by perturbation can be highly inaccurate, yielding poor approximations of prices and welfare costs.

In addition to the direct effect of TFP volatility on approximation quality, we investigate the effect of other parameters. We find that the subjective discount factor, output growth and risk sensitivity are also important for solution accuracy, but in a more indirect manner: the solution methods diverge when these parameters increase the value function’s sensitivity to TFP volatility. We describe the interplay between these parameters and solution accuracy.

Our problem dictates that, at a minimum, we must approximate the value function and policy function of the endogenous choice variable (consumption). In theory, if our solutions are sufficiently accurate, we can use them to accurately approximate any other variable in the model, even if it depends on the value and policy functions in a nonlinear way. The reverse is also true – a poor solution for either the value or policy functions will contaminate approximations of other model quantities. However, in the case of perturbation, we find that computing a local approximation of asset prices directly can ameliorate their dependency on the (poorly approximated) value function and improve their accuracy. We show how this modification can bring the model’s asset pricing implications far closer to those produced by a global method. Unfortunately, the same result does not hold for welfare costs.

Pohl, Schmedders, and Wilms (2015) document similar findings in the context of a standard long-run risk endowment economy. In particular, they compare a local, log-linear solution with global projection methods and demonstrate that the log-linear solution gives rise to very large approximation errors. Caldara et al. (2012) compare perturbation and projection methods for a production economy with recursive utility and stochastic TFP volatility and find that local methods are very accurate and strike an excellent balance between computation time and accuracy. Their results, however, are narrowly applicable to macroeconomic (not asset price) dynamics because they explore a limited parameter space which simultaneously precludes the numerical problems we document as well as the ability to generate sufficient volatility for asset returns. Our work expands the parameter space so as to generate reasonable asset price dynamics and shows that local approximations deteriorate in precisely this region. In a similar vein, Parra-Alvarez (2017) extends Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) to continuous-time models. Specifically, that work, which confines attention to separable utility, concludes that projection methods are superior to perturbation, but that the latter typically strikes a good balance between accuracy and computational cost.

Recent literature shows that as economic models push the boundaries of complexity, global solution methods become a necessity. Fernández-Villaverde et al. (2015) document that perturbation methods are not adequate when dealing with non-linearities that are associated with a kink in interest rate policy. Likewise, Fernández-Villaverde and Levintal (2016) compare solution methods for a model with time-varying probabilities of rare disasters and Epstein-Zin-Weil utility and find that low-probability, high-impact events induce non-linearities that cannot be captured by high-order perturbation methods. Irarrazabal and Parra-Alvarez (2015) document similar findings for a continuous-time, time-varying rare disaster model with recursive utility.

When perturbation is accurate, it is often the preferred solution method since it is extremely fast and can be efficiently nested within an estimation framework. For the particular problem we consider, the projection and perturbation methods are essentially equivalent in computation time, as the model only entertains one state variable[1]. However, if the complexity of the model is increased (more state variables added), a standard global approximation method would quickly deteriorate in computing time due to the curse of dimensionality. For this reason, much of the recent literature in computational economics has been devoted to improving global methods for high-dimensional problems. Judd, Maliar, and Maliar (2011) introduce a simulation-based solution method for DSGE models that ameliorates the curse of dimensionality by focusing on the ergodic region of the solution space. Maliar and Maliar (2015) further improve the aforementioned simulation method by employing a clustering algorithm to reduce the dimensionality of the ergodic space. Judd et al. (2014) and Brumm and Scheidegger (2015) make a number of advances for Smolyak sparse grids, which reduce the sensitivity of global approximation methods to the dimensionality of the model parameter space. Finally, Cai, Judd, and Steinbuks (2015) and Levintal (2016) deploy entirely new approaches to arrive at global approximations for high-dimensional problems. The former, the non-linear certainty equivalent method, is suitable for DSGE models with up to 400 state variables; the latter, “Taylor projection”, exploits the efficiency of Newton’s method to quickly arrive at global approximations.

The results of this paper suggest that when using perturbation for production-based asset pricing models, it would be wise to compare the results with a global solution method. In particular, for models with a high calibration of TFP volatility, a global method, while computationally more burdensome, is more suitable since it yields a superior approximant to the value function.

This paper proceeds as follows. Section 2 describes our model in detail, Section 3 outlines the solution methods applied to the model, Section 4 reports the results of the two solution methods along with diagnostics that explore their accuracy, and Section 5 concludes.

2 Model

We follow Kaltenbrunner and Lochstoer (2008) in specifying a basic neoclassical growth model with Epstein-Zin-Weil utility. As mentioned in the previous section, this model is very similar to that of Croce (2013), the primary difference being that Croce (2013) explicitly specifies a long-run risk component in the total factor productivity process. While Croce (2013) does a better job of explaining both quantity and price dynamics, and while the model of Kaltenbrunner and Lochstoer (2008) exhibits several deficiencies with respect to asset prices, we choose the latter specification for two reasons. First, it is a simpler generalization of the neoclassical growth model, obtained by substituting Epstein-Zin-Weil utility for power utility, and nests the more standard model. Second, through appropriate normalization, the model has only one state variable. This simplicity allows us to emphasize the computational results with greater ease. We anticipate that the results will extend to more complicated, and perhaps appealing, situations.

2.1 Preferences

Our economy admits a representative agent whose utility function follows Epstein and Zin (1989) and Weil (1990):

(1) $U(\bar{C}_t)=\left((1-\beta)C_t^{\frac{1-\gamma}{\theta}}+\beta\left(\mathbb{E}_t\left[U(\bar{C}_{t+1})^{1-\gamma}\right]\right)^{\frac{1}{\theta}}\right)^{\frac{\theta}{1-\gamma}},$

where 0 < β < 1 is the subjective discount factor, $\mathbb{E}_t$ is the conditional expectations operator, $C_t$ denotes aggregate consumption, $\bar{C}_t=(C_t,C_{t+1},\ldots)$, γ denotes the agent's coefficient of relative risk aversion, ψ denotes the agent's inter-temporal elasticity of substitution (IES), and $\theta=\frac{1-\gamma}{1-1/\psi}$. A particularly desirable feature of this utility function is that it separates the IES and risk-aversion parameters, as opposed to the standard constant relative risk aversion (CRRA) utility, where IES and risk aversion are inversely related. In theory, it is not clear that there should be a tight link between these two parameters, as risk aversion is atemporal while IES is temporal.
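For example, under the calibration we adopt below (ψ = 1.5, γ = 5), the composite parameter θ is negative, which is the relevant case throughout our analysis:

$\theta = \dfrac{1-\gamma}{1-1/\psi} = \dfrac{1-5}{1-1/1.5} = \dfrac{-4}{1/3} = -12.$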

As shown in Epstein and Zin (1989) and Weil (1989), the log of the stochastic discount factor, mt+1, for these preferences is

(2) $m_{t+1}=\theta\ln\beta-\frac{\theta}{\psi}\Delta c_{t+1}-(1-\theta)r_{a,t+1},$

where $\Delta c_{t+1}$ denotes log consumption growth and $r_{a,t+1}$ denotes the log gross return on the aggregate wealth portfolio. Of particular importance is the presence of $r_{a,t+1}$ in the specification of the discount factor, which makes innovations to expected consumption growth a priced risk factor; in the standard model with CRRA utility, θ = 1 and the last term disappears. It is this feature of Epstein-Zin-Weil utility that makes agents concerned about shocks to expected future consumption growth and that allows us to amplify the equity premium (Mehra and Prescott, 1985) while evading the risk-free rate puzzle (Weil, 1989).

2.2 Technology

A single firm owns the capital stock and produces a consumption good via Cobb-Douglas technology, using labor and capital as inputs:

$Y_t=(Z_tH_t)^{1-\alpha}K_t^{\alpha},$

where $Z_t$ is the labor-augmenting stochastic technology level, $H_t$ denotes the number of hours worked, $K_t$ represents capital and α is the share of capital in the production function. Under the assumption that $\bar{H}$ is the agent's total leisure endowment, utility is maximized when $H_t=\bar{H}$, $\forall t$, since $H_t$ does not appear in the utility function. Normalizing $\bar{H}=1$, the production function simplifies to

(3) $Y_t=Z_t^{1-\alpha}K_t^{\alpha}.$

The log technology process, $z_t=\ln(Z_t)$, evolves exogenously according to

(4) $z_t=\mu t+\tilde{z}_t,$
(5) $\tilde{z}_t=\varphi\tilde{z}_{t-1}+\sigma_z\epsilon_t,$
(6) $\epsilon_t\sim N(0,1).$

We limit ourselves to the special case of φ = 1, since this allows us to retain only one state variable; our computational results are similar for the case of persistent, yet trend stationary zt (when |φ| < 1). Hence, the log of TFP is a random walk with drift parameter μ, and in this special case shocks to technology are permanent.
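As a point of reference, the following minimal sketch (ours, not part of the original exposition) simulates the log TFP process of Equations (4)–(6) with φ = 1, confirming that log TFP growth has mean μ and standard deviation σz:

```python
import numpy as np

def simulate_log_tfp(mu=0.004, sigma_z=0.02, T=100_000, seed=0):
    """Simulate z_t = mu*t + z~_t with z~_t = z~_{t-1} + sigma_z*eps_t (phi = 1),
    i.e. log TFP as a random walk with drift mu."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)
    z_tilde = np.cumsum(sigma_z * eps)         # permanent stochastic component
    return mu * np.arange(1, T + 1) + z_tilde  # add the deterministic drift

z = simulate_log_tfp()
dz = np.diff(z)
print(dz.mean(), dz.std())  # approximately mu and sigma_z
```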

2.3 Capital accumulation

Following Jermann (1998), we allow capital adjustment costs in the accumulation equation

(7) $K_{t+1}=\phi\!\left(\frac{I_t}{K_t}\right)K_t+(1-\delta)K_t,$

where

(8) $\phi(x)=\frac{\alpha_1}{1-1/\xi}\,x^{1-1/\xi}+\alpha_2$

is an increasing, concave function which induces large changes in the capital stock to be more costly than successive small changes. The parameter ξ governs the degree of concavity and has the desirable feature that as ξ → ∞, ϕ(x) becomes the identity function (with an appropriate specification of $\alpha_1$ and $\alpha_2$); that is, capital adjustment costs disappear. At the other extreme, as ξ → 0, $I_t\to 0$, ∀t, and $\phi(x)\to\exp(\mu)-1+\delta$, allowing us to obtain an endowment economy where all output is consumed each period and the capital stock grows deterministically at rate exp(μ). Hence, for intermediate values, the adjustment cost parameter, ξ, allows us flexibility in matching the relative volatilities of consumption and output. As mentioned above, the remaining parameters are defined so as to eliminate adjustment costs in the deterministic steady state: $\alpha_1=(\exp(\mu)-1+\delta)^{1/\xi}$ and $\alpha_2=\frac{1}{1-\xi}(\exp(\mu)-1+\delta)$ (see Appendix B for a derivation).
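The steady-state restrictions on α1 and α2 are straightforward to verify numerically. The sketch below (function names are ours; μ and δ follow the calibration of Table 1, while ξ is set to an illustrative value) checks that ϕ(x) = x and ϕ′(x) = 1 at the deterministic steady-state investment rate, so that adjustment costs indeed vanish there:

```python
import numpy as np

def adjustment_cost_params(mu, delta, xi):
    """alpha_1, alpha_2 chosen so that phi(x) = x and phi'(x) = 1 at the
    deterministic steady-state investment rate x_ss = exp(mu) - 1 + delta."""
    x_ss = np.exp(mu) - 1.0 + delta
    alpha_1 = x_ss ** (1.0 / xi)
    alpha_2 = x_ss / (1.0 - xi)
    return alpha_1, alpha_2

def phi(x, alpha_1, alpha_2, xi):
    """Jermann (1998) adjustment cost function, Equation (8)."""
    return alpha_1 / (1.0 - 1.0 / xi) * x ** (1.0 - 1.0 / xi) + alpha_2

mu, delta, xi = 0.004, 0.025, 13.0   # mu, delta as in Table 1; xi illustrative
a1, a2 = adjustment_cost_params(mu, delta, xi)
x_ss = np.exp(mu) - 1.0 + delta
print(phi(x_ss, a1, a2, xi) - x_ss)      # ~0: no adjustment costs in steady state
print(a1 * x_ss ** (-1.0 / xi) - 1.0)    # ~0: phi'(x_ss) = 1
```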

2.4 Equilibrium

In equilibrium, the aggregate resource constraint is binding:

(9) $C_t+I_t=Y_t.$

In this basic environment, the welfare theorems are satisfied and the solution to the social planner’s problem yields the same allocations as a competitive equilibrium. The planner’s problem is

(10) $V(K_t,Z_t)=\max_{C_t,K_{t+1}}\left[(1-\beta)C_t^{\frac{1-\gamma}{\theta}}+\beta\left(\mathbb{E}_t\left[V(K_{t+1},Z_{t+1})^{1-\gamma}\right]\right)^{\frac{1}{\theta}}\right]^{\frac{\theta}{1-\gamma}}$

subject to

(11) $K_{t+1}=\phi\!\left(\frac{Z_t^{1-\alpha}K_t^{\alpha}-C_t}{K_t}\right)K_t+(1-\delta)K_t,$
(12) $Z_t=Z_{t-1}\exp(\mu+\sigma_z\varepsilon_t),\quad \varepsilon_t\sim N(0,1).$

By solving the first-order conditions, we obtain the intertemporal Euler equation

(13) $\mathbb{E}_t\left[M_{t+1}\,\phi'\!\left(\frac{I_t}{K_t}\right)\left(\frac{(\alpha-1)Y_{t+1}+C_{t+1}}{K_{t+1}}+\frac{\phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right)+1-\delta}{\phi'\!\left(\frac{I_{t+1}}{K_{t+1}}\right)}\right)\right]=1$

where

(14) $M_{t+1}=\beta\left(\frac{C_{t+1}}{C_t}\right)^{-\frac{1}{\psi}}\frac{V_{t+1}^{\frac{1}{\psi}-\gamma}}{\left(\mathbb{E}_t\left[V_{t+1}^{1-\gamma}\right]\right)^{1-\frac{1}{\theta}}}$

is an alternative expression for the Epstein-Zin-Weil stochastic discount factor, equivalent to $\exp(m_{t+1})$ in Equation (2). As shown in Appendix C, the return on equity is

(15) $R^E_{t+1}=\phi'\!\left(\frac{I_t}{K_t}\right)\left(\frac{(\alpha-1)Y_{t+1}+C_{t+1}}{K_{t+1}}+\frac{\phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right)+1-\delta}{\phi'\!\left(\frac{I_{t+1}}{K_{t+1}}\right)}\right),$

which is the term scaling the stochastic discount factor in Equation (13). Hence, the Euler equation can be written compactly as $\mathbb{E}_t[M_{t+1}R^E_{t+1}]=1$. This expression for the return to equity will be useful when evaluating the quality of model solutions.

To preserve stationarity in the economy, we normalize all variables by the level of the contemporaneous technology process:

(16) $\{\hat{C}_t,\hat{K}_t,\hat{Z}_{t+1},\hat{I}_t,\hat{Y}_t,\hat{V}_t\}=\left\{\frac{C_t}{Z_t},\frac{K_t}{Z_t},\frac{Z_{t+1}}{Z_t},\frac{I_t}{Z_t},\frac{Y_t}{Z_t},\frac{V_t}{Z_t}\right\}.$

Since the preferences represented by Equation (1) are homothetic (Epstein and Zin, 1989), the normalized system of equilibrium conditions can then be expressed as

(17) $\hat{V}(\hat{K}_t)-\left((1-\beta)\hat{C}_t^{\frac{1-\gamma}{\theta}}+\beta\left(\mathbb{E}_t\left[\hat{Z}_{t+1}^{1-\gamma}\hat{V}(\hat{K}_{t+1})^{1-\gamma}\right]\right)^{\frac{1}{\theta}}\right)^{\frac{\theta}{1-\gamma}}=0$
(18) $\mathbb{E}_t\left[\hat{M}_{t+1}\,\phi'\!\left(\frac{\hat{I}_t}{\hat{K}_t}\right)\left(\frac{(\alpha-1)\hat{Y}_{t+1}+\hat{C}_{t+1}}{\hat{K}_{t+1}}+\frac{\phi\!\left(\frac{\hat{I}_{t+1}}{\hat{K}_{t+1}}\right)+1-\delta}{\phi'\!\left(\frac{\hat{I}_{t+1}}{\hat{K}_{t+1}}\right)}\right)\right]-1=0$
(19) $\hat{K}_{t+1}-\frac{1}{\hat{Z}_{t+1}}\left((1-\delta)\hat{K}_t+\phi\!\left(\frac{\hat{I}_t}{\hat{K}_t}\right)\hat{K}_t\right)=0$

where

(20) $\hat{Z}_t=\exp(\mu+\sigma_z\varepsilon_t),\quad \varepsilon_t\sim N(0,1).$

Augmenting Equations (17)–(20) with appropriately scaled versions of Equations (14), (3), and (9) yields the system of equations that we solve with the methods of the following section.

3 Solution methods

We now describe the two methods we use to solve the model of the previous section.

3.1 Perturbation

Perturbation methods, suggested for economic models by Judd and Guu (1997) and Judd (1998), and widely popularized by Schmitt-Grohé and Uribe (2004), build an asymptotically valid polynomial approximation of a function around a point where the solution is known. In general notation, perturbation seeks a local approximation to a function, F, where

(21) $F(x(\varepsilon),\varepsilon)=0,$

and where F(x(0), 0) is known. The typical specialization in economics is for F to represent a system of nonlinear stochastic difference equations,

(22) $F(x(\varepsilon),\varepsilon)=\mathbb{E}_t\left[f(x_{t+1}(\varepsilon),x_t(\varepsilon),\varepsilon)\right]=0,$

where the deterministic steady state, $\mathbb{E}_t[f(x_{t+1}(0),x_t(0),0)]=f(x_{ss},x_{ss},0)$, is known. The canonical economic example is the neoclassical growth model, where f is a system of equations including the inter-temporal Euler equation and constraints, and where the polynomial approximation to f is a Taylor expansion. However, there is no a priori reason to restrict our attention to the inter-temporal Euler equation; since we are interested in computing financial moments and since bond prices in a recursive utility model depend on the value function, it is natural for us to approximate the value function directly. Judd and Guu (1997) and Judd (1998) are early examples of using the value function to generate perturbation conditions. Our particular solution method utilizes both the value function and the intertemporal Euler equation.

We use system (17)–(19) to build approximations of the value and policy functions:

(23) $\tilde{V}_{pert}(\hat{K},\sigma_z)=\sum_{i,j}\hat{V}_{ss}^{(i,j)}(\hat{K}-\hat{K}_{ss})^i\sigma_z^j$
(24) $\tilde{C}_{pert}(\hat{K},\sigma_z)=\sum_{i,j}\hat{C}_{ss}^{(i,j)}(\hat{K}-\hat{K}_{ss})^i\sigma_z^j$

where

(25) $\hat{V}_{ss}^{(i,j)}=\frac{1}{i!\,j!}\left.\frac{\partial^{i+j}\hat{V}_t}{\partial\hat{K}_t^i\,\partial\sigma_z^j}\right|_{\hat{K}_{ss},\,0}$

and

(26) $\hat{C}_{ss}^{(i,j)}=\frac{1}{i!\,j!}\left.\frac{\partial^{i+j}\hat{C}_t}{\partial\hat{K}_t^i\,\partial\sigma_z^j}\right|_{\hat{K}_{ss},\,0}.$

To obtain these approximations, we take successive derivatives of Equations (17) and (18) with respect to K^t and σz, and evaluate the resulting systems of equations at the deterministic steady state to obtain closed form solutions for the coefficients in Equations (25) and (26).

For example, evaluating Equations (17)–(20) at the deterministic steady state is sufficient to determine Z^ss, K^ss, V^ss(0,0) and C^ss(0,0). Taking first derivatives (with respect to K^t and σz) of Equations (17) and (18) and again evaluating at the deterministic steady state allows us to solve for V^ss(1,0), V^ss(0,1), C^ss(1,0) and C^ss(0,1). Continuing in this fashion leads to the approximations in Equations (23) and (24), where the order of approximation is equivalent to the number of times we have differentiated Equations (17) and (18). We emphasize that the full system (17)–(20) is only used to determine the deterministic steady state, and the polynomial approximations are constructed by taking derivatives of only Equations (17) and (18). As mentioned in Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006), the first order solution involves a quadratic matrix equation, but each order of approximation thereafter only necessitates the solution of a linear system. Hence, higher order solutions only require a matrix inversion, albeit of rapidly increasing size.
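For concreteness, evaluating the expansions in Equations (23) and (24) once the coefficients are known is a simple bivariate polynomial evaluation; the sketch below uses placeholder coefficients purely to illustrate the indexing, not values solved from the model:

```python
import numpy as np

def eval_perturbation(K_hat, sigma_z, K_ss, coeffs):
    """Evaluate sum_{i,j} coeffs[i, j] * (K_hat - K_ss)**i * sigma_z**j,
    where coeffs[i, j] stores the scaled steady-state derivative in Eq. (25)."""
    dK = K_hat - K_ss
    n_i, n_j = coeffs.shape
    return sum(coeffs[i, j] * dK ** i * sigma_z ** j
               for i in range(n_i) for j in range(n_j))

coeffs = np.zeros((4, 4))   # placeholder third-order coefficient array
coeffs[0, 0] = 1.0          # steady-state level of the value function
coeffs[1, 0] = 0.05         # first derivative in the capital direction
coeffs[0, 2] = -8.0         # second-order volatility correction
print(eval_perturbation(K_hat=10.2, sigma_z=0.02, K_ss=10.0, coeffs=coeffs))
```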

With approximations $\tilde{V}_{pert}(\hat{K},\sigma_z)$ and $\tilde{C}_{pert}(\hat{K},\sigma_z)$ in hand, we can compute any other variable in the economy, where the accuracy of the approximation of those variables will depend on the underlying accuracy of our approximations for $\hat{V}_t$ and $\hat{C}_t$. However, we can also approximate other variables of interest by augmenting system (17)–(19) with additional equilibrium conditions. In our case, we are interested in approximating both the risk-free rate and $\log(\hat{V}_t/\hat{C}_t)$ (which we will use in computing welfare costs). The requisite equilibrium conditions are

(27) $R^f_{t+1}-\mathbb{E}_t[M_{t+1}]^{-1}=0$
(28) $LVC_t-\log(\hat{V}_t/\hat{C}_t)=0.$

Adding Equations (27) and (28) to system (17)–(19) allows us to obtain approximations

(29) $\widetilde{R^f}_{pert}(\hat{K},\sigma_z)=\sum_{i,j}\widehat{R^f}_{ss}^{(i,j)}(\hat{K}-\hat{K}_{ss})^i\sigma_z^j$
(30) $\widetilde{LVC}_{pert}(\hat{K},\sigma_z)=\sum_{i,j}\widehat{LVC}_{ss}^{(i,j)}(\hat{K}-\hat{K}_{ss})^i\sigma_z^j$

as outlined above.

3.2 Projection

Similar to perturbation, projection methods seek a polynomial approximation to a function, F, as in Equation (21), or more commonly to the special case, f, as in Equation (22). However, rather than using the known solution at ε = 0 to construct a local approximation, we specify a polynomial expansion, $\hat{x}$, with coefficients chosen to minimize $f(\hat{x})$ globally, over the domain of x. As before, for the neoclassical growth model, f would be comprised of the inter-temporal Euler equation and constraints, and a projection solution would specify a polynomial expansion of the consumption policy that would minimize the Euler equation error.

The analogous approach to our problem would be to use Equations (17) and (18) to obtain approximations,

(31) $\tilde{V}_{proj}(\hat{K})=\sum_{j=0}^{M}a_j\varphi_j(\hat{K})$

and

(32) $\tilde{C}_{proj}(\hat{K})=\sum_{j=0}^{M}b_j\varphi_j(\hat{K}),$

where M is the order of approximation and $\varphi_j$, $j=0,1,2,\ldots$, represent a set of linearly independent polynomial basis functions. That is, given an order of approximation M, we could specify a grid of $N\geq M+1$ points for $\hat{K}$ and evaluate Equations (17) and (18) [coupled with the constraint (19)] at those points to obtain a system of 2N equations in 2(M + 1) unknowns. We could then use a nonlinear solution method to find the coefficients $a_j$ and $b_j$, $j=0,\ldots,M$, in Equations (31) and (32), that best satisfy (17) and (18).

As there is no theorem to guarantee convergence of the preceding approach, we follow an alternative methodology, suggested by Campanale, Castro, and Clementi (2010) and Kaltenbrunner and Lochstoer (2008), which is to couple polynomial approximations of the value and policy functions with value function iteration. Specifically, we seek a polynomial approximation to the value function as in Equation (31). Letting $N_k\geq M$, we specify a (not necessarily equally spaced) grid for $\hat{K}$, spanning the values $(0.1\hat{K}_{ss},1.9\hat{K}_{ss})$. Additionally, we set $N_\varepsilon=\frac{M+1}{2}$ and confine ε to the order-$N_\varepsilon$ Gauss-Hermite abscissae. To ease notation, we suppress time subscripts, collect the $N_k$ values of $\hat{K}$ in the vector $\hat{\mathbf{K}}$ and group the basis functions evaluated at each value of $\hat{K}$ in the matrix $\Phi(\hat{\mathbf{K}})$, where $\Phi(\hat{\mathbf{K}})_{ij}=\varphi_j(\hat{K}_i)$. Using this notation, $\tilde{V}_{proj}(\hat{\mathbf{K}})=\Phi(\hat{\mathbf{K}})\mathbf{a}$. As this solution method is not standard practice, we provide pseudocode in Appendix A.
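To fix ideas, the fitting step inside each iteration amounts to a least-squares projection of the current value function iterate on Chebyshev basis functions evaluated over the capital grid; the sketch below illustrates that step with numpy's Chebyshev utilities (the grid size, steady-state level and the placeholder value function are ours; the full algorithm is given in Appendix A):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

M = 5                                              # order of approximation
K_ss = 10.0                                        # placeholder steady-state capital
K_grid = np.linspace(0.1 * K_ss, 1.9 * K_ss, 20)   # N_k >= M grid points

# Map the capital grid into [-1, 1], the domain of the Chebyshev polynomials.
x = 2.0 * (K_grid - K_grid.min()) / (K_grid.max() - K_grid.min()) - 1.0

Phi = cheb.chebvander(x, M)        # Phi[i, j] = phi_j(K_i)
V_grid = np.log(K_grid)            # placeholder for the current value function iterate

a, *_ = np.linalg.lstsq(Phi, V_grid, rcond=None)   # coefficients a in V~proj = Phi a
print(np.max(np.abs(Phi @ a - V_grid)))            # maximum fit error on the grid
```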

4 Results

We now apply the solution methods outlined in the previous section to the model of Section 2 and state the main result of our paper: while the quantity dynamics of the two methods are essentially equivalent for a variety of parameter values, the same is not true of variables that are tightly linked to the value function, such as asset prices and welfare costs. We discuss the reasons for this result and outline a very simple motivating example that provides intuition for the particular problem we consider. We conclude the section by reporting diagnostics which compare the accuracy of the solution methods.

4.1 Calibration

We fix several parameters of our model and report them in Table 1. These values are a widely accepted parameterization of the US economy in the literature; in particular, the depreciation rate and share of capital, δ and α, respectively, are identical to those of Jermann (1998). The quarterly growth rate, μ, implies annual growth of 1.6 percent, and the intertemporal elasticity of substitution (IES) parameter, ψ, is set in the middle of the range (1,2] advocated by Bansal, Kiku, and Yaron (2007). We also considered alternative values of μ and ψ but do not report the corresponding solutions and simulations as they do not alter the qualitative nature of our results. Finally, the adjustment cost parameter, ξ, was chosen so that the ratio of volatilities of log consumption growth to log output growth matches empirical estimates (in the vicinity of 0.5), which depend on the time period and frequency of the data (see discussion below).

Table 1:

Quarterly model calibration.

α       δ       ψ      μ       ξ
0.36    0.025   1.5    0.004   13

To understand our parameterization of the TFP volatility, it is instructive to consider the data moments reported in Table 2. The table contains means and volatilities for GDP, aggregate consumption and the 90-day T-bill, both at annual and quarterly frequencies, for several sample periods. We highlight two important features of the data. First, the volatility of log output growth is markedly different between pre-war and post-war samples, the former being roughly 2.5 to 3 times as great as the latter. Second, the mean of the risk-free rate increases and its volatility decreases as the time horizon is curtailed to include fewer years. In the case of the mean, values in later samples are up to twice as large as that of the pre-war sample.

Table 2:

Data moments for different periods and frequencies.

                   1929–2008 (A)   1950–2008 (A)   1950–2008 (Q)   1960–2008 (Q)   1970–2008 (Q)
Std(Δc)            0.0108          0.00557         0.00490         0.00451         0.00429
Std(Δy)            0.0246          0.0108          0.00980         0.00857         0.00840
Std(Δc)/Std(Δy)    0.439           0.518           0.500           0.526           0.510
Mean(rf)           0.00847         0.0146          0.0146          0.0173          0.0168
Std(rf)            0.0119          0.00747         0.00747         0.00723         0.00801
  1. “A” denotes annual frequency and “Q” denotes quarterly frequency. Quarterly samples begin with the first quarter of the stated year and end with the final quarter of 2008. “c” and “y” denote the log of real consumption (nondurables plus services) and GDP, respectively, and are obtained from NIPA Tables 1.1.4–1.1.6, with annual values scaled to quarterly for comparison. “rf” denotes the net return on the 90-day T-bill, obtained from CRSP (monthly frequency for all horizons), converted to real by subtracting the 12-month lagged moving average of CPI return (as a forecast of expected inflation). The risk-free rate is annualized by a simple scale factor.

As a result of the variance in sample moments across sub-periods, we observe a wide range of calibrated values for the TFP volatility, σz, and the discount factor, β, in the literature. Since $\text{Std}(\Delta y)\approx(1-\alpha)\sigma_z$, models that calibrate to post-war, quarterly data often specify much smaller values of σz than models which calibrate to pre-war, annual data. Hence, we allow $\sigma_z\in\{0.01,0.02,0.03,0.04\}$, which correspond to $\text{Std}(\Delta y)\in\{0.0064,0.0128,0.0192,0.0256\}$, a range that encompasses the moments reported in Table 2.
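The mapping between σz and output growth volatility quoted above is immediate from Std(Δy) ≈ (1 − α)σz; as a quick check:

```python
alpha = 0.36
for sigma_z in (0.01, 0.02, 0.03, 0.04):
    print(sigma_z, round((1 - alpha) * sigma_z, 4))
# prints 0.0064, 0.0128, 0.0192 and 0.0256, the Std(dy) values quoted in the text
```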

For the remaining parameters, the subjective discount factor, β, and coefficient of relative risk aversion, γ, we entertain β ∈ [0.980, 0.998] and γ ∈ {2, 5, 10}. We choose these values because they not only encompass accepted values in the literature, but they allow a broad enough range of parameterizations to investigate their effect on the sensitivity of the value function to σz. In general, we are primarily concerned with β > 0.99, as these higher values are requisite for matching moments of the risk-free asset.

4.2 Model implications

4.2.1 Low volatility

Table 3 reports simulation results for both projection and perturbation methods when σz = {0.01, 0.02} and when γ = 5. We set β = 0.998, which simultaneously yields a risk-free rate in the neighborhood of those observed in post-war data and allows us to closely approximate the historical pre-war risk-free rate of 0.00847 (see Table 4) when σz > 0.02. We use fifth-order Chebyshev polynomials for projection and third-order Taylor expansions in the case of perturbation – for the projection method, higher-order approximations make little material difference to the stated results, and for the perturbation method, numerical instabilities lead to potentially greater discrepancies than those reported[2]. Finally, moments are computed by simulating 100,000 quarterly observations and then aggregating financial variables to an annual frequency by a simple scale factor.

Table 3:

Simulation moments for both projection (5th order) and perturbation (3rd order) methods, for σz = {0.01, 0.02}, γ = 5 and β = 0.998.

                      σz = 0.01                        σz = 0.02
                 Proj       Pert       NPert       Proj       Pert       NPert
Std(Δc)          0.00353    0.00352    0.00352     0.00704    0.00702    0.00702
Std(Δy)          0.00643    0.00643    0.00643     0.0129     0.0129     0.0129
Std(Δc)/Std(Δy)  0.549      0.549      0.549       0.548      0.546      0.546
Std(Δi)/Std(Δy)  1.85       1.85       1.85        1.84       1.84       1.84
E[Rf]            0.0182     0.0181     0.0190      0.0163     0.0161     NaN
Std(Rf)          0.00116    0.00115    0.00114     0.00232    0.00229    NaN
E[RE − Rf]       0.0000821  0.000213   −0.000658   0.000653   0.000845   NaN
Std(RE − Rf)     0.00221    0.00221    0.00221     0.00440    0.00440    NaN
SR(RE)           0.0371     0.0964     −0.297      0.148      0.192      NaN
E[log(V̂/Ĉ)]      3.01       3.00       2.94        2.31       2.12       NaN
  1. Simulations are quarterly and financial moments are annualized.

We begin by considering the projection results in Table 3. Setting σz = 0.01 results in an output volatility slightly lower than observed in quarterly data and ξ allows us to fix the ratio Std(Δc)/Std(Δy). Hence, it is not surprising that the standard deviations of consumption and output are not drastically different than their counterparts in the data. The remaining moments are freely determined, and in some cases are quite different from observed values. In particular, the equity premium and its volatility are extremely low and the volatility of the risk-free is about six to seven times smaller than what would be expected in the data. In fact, Kaltenbrunner and Lochstoer (2008) find that while holding the Sharpe ratio of equity fixed, there is a trade-off in matching the mean and variance of the equity asset and the volatility of the risk-free. We emphasize that a simple modification to the model à la Croce (2013) (explicitly parameterizing a time varying growth rate in the TFP process) can rectify some of these issues. However, in order to highlight our computational results, we favor parsimony and forsake the additional state variable.

The remaining columns of Table 3 report simulation results for perturbation, both for the case where $R^f_t$ and $\log(\hat{V}_t/\hat{C}_t)$ are computed with a direct local approximation (the column denoted “Pert”) and where they are computed nonlinearly from the local solutions of the value function and consumption policy (the column denoted “NPert”). Regardless of the solution method and the value of σz, we see that a third-order perturbation yields quantity dynamics that are almost identical to those of projection. The same is not true of asset pricing moments and $\log(\hat{V}_t/\hat{C}_t)$. When σz = 0.01, both variants of the perturbation method generate simulated moments that are in close agreement with projection, the one exception being the equity premium, which is extremely close to zero in all cases. However, increasing the TFP volatility to σz = 0.02 renders the nonlinear perturbation unable to compute asset prices and $\log(\hat{V}_t/\hat{C}_t)$. The reason is that in the presence of higher volatility, explosive paths of the value function solution result in negative values under a radical or log function, precluding our ability to compute the corresponding moments. These values are reported as “NaN”. Alternatively, with the direct perturbation, obtained by augmenting the perturbation conditions with Equations (29) and (30), we are able to drastically improve the simulated moments of the local method; in this case, column “Pert” shows that the local method only exhibits slight deviations from the global method for asset prices, again with the exception of the equity premium, which is very close to zero in both cases. The deviation for $\log(\hat{V}_t/\hat{C}_t)$ is slightly larger, but not severe. As we mention in Section 3, there is no theoretical reason to resort to direct approximations for ancillary model variables; in fact, if the solutions for $\hat{V}_t$ and $\hat{C}_t$ were good enough, any nonlinear function of them would also yield a highly accurate approximation. However, as we will see below, perturbation delivers a poor solution for $\hat{V}_t$ when σz is high. The result is that a direct local approximation of $R^f_t$ and $\log(\hat{V}_t/\hat{C}_t)$ reduces the dependency of these variables on the value function and improves their accuracy.

4.2.2 High volatility

Asset pricing papers that match moments of annual, pre-war data sets are too numerous to cite. Table 4 reports simulation results for both projection and perturbation methods when σz = {0.03, 0.04}, the latter being a value that conforms to pre-war, annual parameterizations. As before, γ = 5 and β = 0.998.

Table 4:

Simulation moments for both projection (5th order) and perturbation (3rd order) methods, for σz = {0.03, 0.04}, γ = 5 and β = 0.998.

                      σz = 0.03                     σz = 0.04
                 Proj      Pert      NPert      Proj       Pert       NPert
Std(Δc)          0.0105    0.0105    0.0105     0.0140     0.0138     0.0138
Std(Δy)          0.0193    0.0193    0.0193     0.0257     0.0257     0.0257
Std(Δc)/Std(Δy)  0.547     0.543     0.543      0.543      0.537      0.537
Std(Δi)/Std(Δy)  1.82      1.83      1.83       1.80       1.81       1.81
E[Rf]            0.0130    0.0127    NaN        0.00847    0.00779    NaN
Std(Rf)          0.00345   0.00343   NaN        0.00455    0.00468    NaN
E[RE − Rf]       0.00166   0.00195   NaN        0.00299    0.00370    NaN
Std(RE − Rf)     0.00651   0.00654   NaN        0.00855    0.00860    NaN
SR(RE)           0.254     0.299     NaN        0.350      0.430      NaN
E[log(V̂/Ĉ)]      1.44      0.663     NaN        0.561      −1.38      NaN
  1. Simulations are quarterly and financial moments are annualized.

The previous discrepancies now become exaggerated: while the global method and both variations of the local method show high agreement for quantity dynamics, solutions for asset moments and $\log(\hat{V}_t/\hat{C}_t)$ diverge as σz increases. As before, nonlinear perturbation is unable to compute asset prices and $\log(\hat{V}_t/\hat{C}_t)$ for high σz, but the direct local approximations ameliorate the problem. However, in the most extreme case of σz = 0.04, even the direct perturbation and projection show moderate discrepancies for asset prices, and for both values of σz the means of $\log(\hat{V}_t/\hat{C}_t)$ are quite different.[3]

The inability of the local method to approximate $\log(\hat{V}_t/\hat{C}_t)$ is crucial for welfare analysis. For example, to compute the welfare costs of TFP volatility, one would simply evaluate the difference $\log(V^l_t/C^l_t)-\log(V^h_t/C^h_t)$, where $V^l_t$ and $C^l_t$ are computed under low volatility and $V^h_t$ and $C^h_t$ are computed under high volatility. The resulting value is interpreted as the percentage change in the agent's utility (as a fraction of consumption) as volatility changes. These differences are easily computed from the values reported in Table 3 and Table 4: according to Chebyshev projection, a one-percentage-point increase in TFP volatility from σz = 0.03 to σz = 0.04 results in a welfare loss of 1.44 − 0.561 = 0.879, while the analogous computation due to perturbation is 0.663 − (−1.38) = 2.04 – more than twice the value of the global method.

We will see that the findings in this section are a result of the fact that perturbation is a local approximation around the deterministic steady state (σz = 0), and that the value function exhibits a high degree of curvature in the direction of σz. For this reason, perturbation has difficulty achieving an accurate approximation as the calibrated value of σz moves away from zero. Further, although we do not consider values of σz > 0.04, we note that projection methods will never result in negative approximations (and hence, explosive paths) of the value function, as perturbation does, for any value of σz, since their accuracy is not tied to the radius of convergence of a local expansion. This point is emphasized by Den Haan and de Wind (2009).

4.3 Graphical evidence

We now provide graphical evidence to support the model solutions of Section 4.2. Figure 1 shows policy function approximations for 5th order Chebyshev projection and perturbation of orders 1, 2 and 3, all for the case of σz = 0.04 and β = 0.998 (for C^t and V^t, the “Pert” and “NPert” solutions are identical). Figure 2 and Figure 3 depict similar approximations for the value and bond price functions.

Figure 1: Consumption policy approximations for Chebyshev projection (5th order) and perturbation of orders 1, 2 and 3, where σz = 0.04 and β = 0.998.

Figure 2: Value function approximations for Chebyshev projection (5th order) and perturbation of orders 1, 2 and 3, where σz = 0.04 and β = 0.998. Note the widely different scales on the vertical axes.

Figure 3: Bond price function approximations for Chebyshev projection (5th order) and perturbation of orders 1, 2 and 3, where σz = 0.04 and β = 0.998. Note the widely different scales on the vertical axes.

These plots clarify the results reported in Table 4: both projection and perturbation produce policy functions that are in close agreement, while there is a wide discrepancy in their solutions for the value and bond price functions, even among the different-order perturbations. Since the quantity dynamics of the solutions are not sensitive to the value function, it is not surprising that the simulated macroeconomic moments of the two methods do not differ greatly. However, the risk-free rate and $\log(\hat{V}_t/\hat{C}_t)$ both depend directly on the level and shape of the value function, and hence are quite different across methods. Figures 4–6 depict the same approximations for the case of σz = 0.01, and demonstrate that when the TFP volatility is low, the solution methods are more similar, as expected from the simulation output in Table 3. In this case, the consumption policy approximations overlap to an even greater extent, the value function approximations are separated by only a (relatively) small level shift and the bond price functions are shifted closer together and also overlap more. The discrepancies in the value and bond price functions further diminish as we shrink σz toward zero.

We note that in our model it is possible to analytically solve for the return on equity in terms of aggregate variables (see Appendix C), and hence it is unaffected by poor local approximations of the value function. It follows that local approximations of the equity premium are only affected by the value function via the risk-free rate. This result may not extrapolate to more general models where analytical expressions of the return on equity are not available.

Figure 4: Consumption policy approximations for Chebyshev projection and perturbation of orders 1, 2 and 3, where σz = 0.01 and β = 0.998.

Figure 5: Value function approximations for Chebyshev projection and perturbation of orders 2 and 3, where σz = 0.01 and β = 0.998.

Figure 6: Bond price function approximations for Chebyshev projection and perturbation of orders 2 and 3, where σz = 0.01 and β = 0.998.

To understand why the value functions for the two solution methods diverge for large σz, it is useful to think of the value function and consumption policy as functions of both the state variable, $\hat{K}$, and the TFP volatility parameter, σz. Figures 7–9 show policy, value and bond price function approximations for 5th order Chebyshev projection and 3rd order perturbation, when σz ∈ [0, 0.04]. Thus, the approximations in Figures 1–6 are simply cross sections (fixing σz) of the functions depicted in Figures 7–9; e.g. the value function in panel (a) of Figure 2 corresponds to the cross-section of Figure 8 at σz = 0.04. It becomes apparent from inspecting these surfaces that the value function exhibits a high degree of curvature in the direction of σz, with the amount of curvature increasing as σz approaches zero, whereas the consumption policy is quite flat. As a result, similar to a Taylor expansion of the square root function centered near zero (see the illustration below), we anticipate that the radius of convergence of a Taylor polynomial approximation of the value function will diminish for approximations centered at points very close to σz = 0. This is exactly what a perturbation solution is: a local Taylor approximation at σz = 0. It follows that, for the particular case of the value function, we have no guarantee of convergence for values of σz far away from zero, and in fact we should not be surprised to see divergent behavior, as suggested by the previous plots. This result is congruent with Den Haan and de Wind (2009), who find that nonlinearities in DSGE models can render high-order perturbation solutions that are explosive. On the other hand, the radius of convergence for the consumption policy is likely to be quite large, and we expect local Taylor approximations to converge for a wide range of σz – this is corroborated by Figure 7, where we are unable to distinguish the two surfaces. We are careful to note, however, that due to the nonlinearity and numerical difficulty of the problem, it is not possible to derive the true radius of convergence, and hence we cannot be certain that it is the cause of the large numerical discrepancies in the local approximation; rather, we simply use the nonlinearities of the approximated surfaces as a guide, suggesting the radius of convergence might be responsible.
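The radius-of-convergence intuition can be illustrated with the square root function itself: a Taylor expansion of √x centered at a point a converges only within |x − a| ≤ a, so an expansion centered very close to zero breaks down a short distance away, and adding terms makes matters worse. A small numerical illustration (our own, outside the model):

```python
import numpy as np

def sqrt_taylor(x, a, order):
    """Taylor polynomial of sqrt(x) around a, via the binomial series
    sqrt(a) * sum_k C(1/2, k) ((x - a)/a)**k; converges only for |x - a| <= a."""
    u = (x - a) / a
    coef, total = 1.0, 0.0
    for k in range(order + 1):
        total += coef * u ** k
        coef *= (0.5 - k) / (k + 1)   # next generalized binomial coefficient
    return np.sqrt(a) * total

a = 0.01
for x in (0.015, 0.04):
    print(x, np.sqrt(x), [round(sqrt_taylor(x, a, n), 4) for n in (1, 5, 20)])
# Inside the radius (x = 0.015) the expansion converges toward sqrt(x);
# outside it (x = 0.04) higher-order terms drive the approximation further away.
```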

Figure 7: Consumption policy approximations for 5th order Chebyshev projection and 3rd order perturbation, for σz ∈ [0, 0.04].

Figure 8: Value function approximations for 5th order Chebyshev projection and 3rd order perturbation, for σz ∈ [0, 0.04]. The lower surface corresponds to perturbation.

Figure 9: Bond price function approximations for 5th order Chebyshev projection and 3rd order perturbation, for σz ∈ [0, 0.04]. The lower surface corresponds to perturbation.

The upshot of the foregoing results is that a global projection method is more robust to value function curvature, since it seeks to minimize an error equation expressed in terms of the true (unknown) value function, rather than approximating the truth at a distant focal point.

4.4 Solution evaluation and sensitivity analysis

In the preceding analysis we have merely shown some conditions under which the two solution methods we consider are different; we have not formally investigated their relative accuracy. We now undertake the important task of determining which of the solutions is a closer approximation to the unknown truth and do so for a variety of model parameter values. While our primary evaluation criterion will be Euler equation errors, we will conclude the section with a discussion of the Den-Haan-Marcet statistic (Den Haan and Marcet, 1994).

4.4.1 Pricing errors

The fundamental asset pricing equation is 1=Et[Mt+1Rt+1], where Mt+1 is the time t stochastic discount factor, and Rt+1 is the return for any asset between t and t + 1. Thus, from Equation (18) we have

(33) $1=\mathbb{E}_t\left[M_{t+1}R^e_{t+1}\right]=\mathbb{E}_t\left[M_{t+1}\,\phi'\!\left(\frac{\hat{I}_t}{\hat{K}_t}\right)\left(\frac{(\alpha-1)\hat{Y}_{t+1}+\hat{C}_{t+1}}{\hat{K}_{t+1}}+\frac{\phi\!\left(\frac{\hat{I}_{t+1}}{\hat{K}_{t+1}}\right)+1-\delta}{\phi'\!\left(\frac{\hat{I}_{t+1}}{\hat{K}_{t+1}}\right)}\right)\right].$

Since the stochastic discount factor $M_{t+1}$ incorporates current consumption, $\hat{C}_t$, in its denominator [see Equation (14)], we can interpret pricing errors as a fraction of contemporaneous consumption. As suggested by Judd and Guu (1997), Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006), and Caldara et al. (2012), base-10 logarithms of pricing errors in Equation (33) can be interpreted in the following manner: a value of −1 corresponds to a 10% consumption error, a value of −2 corresponds to a 1% consumption error, a value of −3 corresponds to a 0.1% consumption error, and so on. Combining Equation (33) with the long simulations of εt [see Equation (20)] used in Section 4.2, we can compute the mean of the pricing errors implied by the model, for each solution method. The expectation is approximated by a Gauss-Hermite quadrature rule, with the order chosen so as to exactly compute the integral for the finite polynomial solutions. The pricing errors are reported graphically in the upper rows of Figures 10–13. The individual plots depict how mean Euler equation errors vary with β, where β ∈ [0.980, 0.998] – in general, perturbation solution quality degrades as β rises. Moving across the upper rows, from left to right, we are then able to observe the effect of increasing risk aversion, γ, and moving between the four figures we observe the effect of increasing TFP volatility, σz – as with β, the quality of the perturbation solution degrades as each of these parameters increases. At the lower extreme, in Figure 10, when σz = 0.01 and γ = 2, a 3rd order perturbation dominates a 5th order projection for virtually all values of β that we consider. However, both solutions produce errors that most would consider economically insignificant (less than 0.01% of consumption). Holding σz fixed and increasing γ, the perturbation errors rise to levels as high as 1% of consumption, for high values of β. These qualitative results become more pronounced in Figures 11–13, where at the upper extreme (σz = 0.04 and γ = 10), perturbation errors exceed 10% of consumption, for high values of β. It is this final case that deserves particular attention: models that calibrate to annual, pre-war data generally require high values of σz (on the order of 0.04) and β (on the order of 0.998 or above) in order to match output volatility and the level of the risk-free rate. We see that these calibrations, matched with moderate levels of risk aversion (above 5), can lead to poor local approximations. On the other hand, models that calibrate to quarterly, post-war data typically obtain much smaller values of σz (on the order of 0.01 or below), and do not suffer from poor local approximations.
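To make the error metric concrete, the sketch below evaluates the conditional expectation in Equation (33) with Gauss-Hermite quadrature and converts the pricing residual to the base-10 consumption-error scale described above. The product M_{t+1}R^e_{t+1} is supplied as a callable because its form depends on which approximate solution is being evaluated; all names here are ours:

```python
import numpy as np

def log10_euler_error(K_hat, sdf_times_return, n_nodes=15):
    """log10 |E_t[M_{t+1} R^e_{t+1}] - 1| at state K_hat.

    sdf_times_return(K_hat, eps) must return M_{t+1} * R^e_{t+1} for a
    standard-normal shock eps, as implied by the candidate solution.
    """
    # Gauss-Hermite nodes/weights integrate against exp(-x^2); rescale them
    # so the rule integrates against the standard normal density instead.
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    eps = np.sqrt(2.0) * nodes
    w = weights / np.sqrt(np.pi)

    expectation = np.sum(w * np.array([sdf_times_return(K_hat, e) for e in eps]))
    return np.log10(np.abs(expectation - 1.0))

# Toy candidate whose pricing residual is 0.1% of consumption.
toy = lambda K, e: 1.001
print(log10_euler_error(10.0, toy))   # -3.0, i.e. a 0.1% consumption error
```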

Figure 10: Mean log10 Euler equation errors (first row), mean risk-free rate (second row) and mean log ratio of value function to consumption policy, plotted as functions of β for different values of γ. σz = 0.01 in all cases.

Figure 11: Mean log10 Euler equation errors (first row), mean risk-free rate (second row) and mean log ratio of value function to consumption policy, plotted as functions of β for different values of γ. σz = 0.02 in all cases.

Figure 12: Mean log10 Euler equation errors (first row), mean risk-free rate (second row) and mean log ratio of value function to consumption policy, plotted as functions of β for different values of γ. σz = 0.03 in all cases.

Figure 13: Mean log10 Euler equation errors (first row), mean risk-free rate (second row) and mean log ratio of value function to consumption policy, plotted as functions of β for different values of γ. σz = 0.04 in all cases.

The fundamental characteristic driving these results is the curvature of the value function with respect to TFP volatility: as shown in Figure 8, the value function can exhibit a high degree of curvature in the direction of σz. In cases where the curvature is extreme, a local method such as perturbation will have difficulty approximating the function at points far from the deterministic steady state (the locus of approximation), a result which is corroborated by Figures 10–13. This is especially relevant for models that require a high calibrated value of σz. The parameters γ and β have an effect insofar as they increase the sensitivity of the value function to changes in σz; i.e. the sensitivity increases with each of these parameters.

The second and third rows of Figures 10–13 depict the mean of $R^f_t$ and the mean of $\log(\hat{V}_t/\hat{C}_t)$, respectively, across simulations. As in Section 4.2, we compute these values both nonlinearly, via Equations (14), (27) and (28), and directly, via Equations (29) and (30). As with the Euler equation errors, the nonlinear perturbation risk-free rate deviates dramatically from that of projection as β, γ and σz rise. This discrepancy is most pronounced for σz = 0.04, γ ≥ 5 and β ≥ 0.99. On the other hand, the direct perturbation computation appears to be quite similar to projection for all parameter values. In fact, the direct local computation also exhibits a small amount of divergent behavior, but the graphical evidence is washed out by the scale of the nonlinear deviation. The moments in Table 3 and Table 4 give an idea of the magnitude of divergence.

The reason for the discrepancy in the risk-free rate computations is the same as for the Euler equation errors: for high values of β, γ and σz, perturbation provides a poor approximation to the value function. Since the risk-free rate depends directly on the value function in models with recursive utility [see Equations (27) and (14)], it is likewise poorly approximated by perturbation, insofar as the value function approximation is poor. This effect is most severe when we compute the risk-free rate nonlinearly. Viewed from the opposite perspective, if our approximations for $\hat{C}_t$ and $\hat{V}_t$ were highly accurate, a nonlinear computation of $R^f_t$ would likewise be highly accurate. Thus, the large deviations in Figures 11–13 are just further evidence of the poor local approximation of $\hat{V}_t$ for high β, γ and σz. The interesting aspect of our results, though, is that we can ameliorate the effect of the value function by directly computing the risk-free rate via a Taylor expansion. This latter method anchors the risk-free rate at its deterministic steady state and weakens the computational relationship between $R^f_t$ and $\hat{V}_t$.

The final rows of Figures 10–13 lend more insight into the foregoing results. As mentioned in Section 4.2, we include $\log(\hat{V}_t/\hat{C}_t)$ in our analysis, since this variable is instrumental in welfare evaluations. Once again, the approximations deteriorate as β, γ and σz increase, which we attribute to the poor local approximation of the value function. However, in the case of $\log(\hat{V}_t/\hat{C}_t)$ the deviations are more severe (for the direct Taylor expansion method) than for the risk-free rate. The reason for this is that the risk-free rate depends on the value function in both numerator and denominator [see Equations (14) and (27)], which mitigates the error propagation of the value function approximation. The same could be true of welfare computations of the form $\log(\hat{V}_t/\hat{C}_t)-\log(\hat{V}'_t/\hat{C}'_t)$ when $\hat{V}_t$ and $\hat{V}'_t$ are computed with the same σz (i.e. welfare effects are evaluated for a variable other than σz). However, since the approximations of $\hat{V}_t$ are likely to be very different for different values of σz (it will be well approximated for low volatility and badly approximated for high volatility), we expect that welfare computations attributed to TFP volatility will be very poorly approximated.

As a final note, we repeated all of the previous analysis for μ = 0 and ψ = 0.5; these changes only caused slight shifts, leaving the results above qualitatively the same. For space considerations, we do not include them. We recognize, however, the particular interplay between β and μ: lowering the growth rate increases the model's ability to tolerate high values of β before local approximations become very poor. That is, we can think of the model as depending on a single growth-adjusted subjective discount factor, $\beta^*=\beta\exp(\mu)$ – lowering μ or β effectively decreases $\beta^*$, and hence the model's sensitivity to σz.

4.4.2 Den-Haan-Marcet statistic

Since agents in our model have rational expectations, the residual of the pricing equation,

(34) $u_{t+1}=1-M_{t+1}R^e_{t+1}$

should not be in the time t information set. That is, under the null hypothesis that we have correct solutions for the value and policy functions, β = 0 in regressions of the form

(35) $u_{t+1}=\sum_{i=1}^{n}\beta_i x_{i,t}+\zeta_{t+1},$

for t = 1, 2, … , T, where xi,t represent variables in the time t information set. Den Haan and Marcet (1994) suggest testing this hypothesis by constructing a Wald-type statistic

(36) $DM(n)=\mathbf{u}'X\left[\sum_{t=1}^{T}\mathbf{x}_t\mathbf{x}_t'\hat{\zeta}_{t+1}^2\right]^{-1}X'\mathbf{u}$

where $\mathbf{x}_t$ is the vector of time-t regressors, $x_{1,t},x_{2,t},\ldots,x_{n,t}$, $X$ is the matrix with rows $\mathbf{x}_t'$, and $\hat{\zeta}_{t+1}=u_{t+1}-\mathbf{x}_t'\hat{\beta}$. Under the null hypothesis, $DM(n)\overset{a}{\sim}\chi^2(n)$; however, since the probability of attaining the true solution is zero, we expect that large values of T will force a rejection of the test. To account for this, Den Haan and Marcet (1994) compute DM(n) for multiple simulations of u and determine the proportion of times that the statistic falls within certain critical limits of the χ2(n) distribution. If the approximate solutions are good, the proportions within these bounds should be close to the actual area under the χ2(n) density function.
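For reference, the statistic in Equation (36) can be computed directly from a simulated residual series and a matrix of time-t instruments; a minimal sketch (variable names are ours) is below, verified here on a toy case in which the null holds by construction:

```python
import numpy as np
from scipy import stats

def dhm_statistic(u, X):
    """Den Haan-Marcet statistic: u holds the residuals u_{t+1} (length T),
    X holds the time-t instruments x_t (T x n). Returns (DM, p-value)."""
    T, n = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, u, rcond=None)
    zeta = u - X @ beta_hat                         # regression residuals
    A = (X * zeta[:, None] ** 2).T @ X              # sum_t x_t x_t' zeta_{t+1}^2
    dm = (u @ X) @ np.linalg.solve(A, X.T @ u)
    return dm, stats.chi2(df=n).sf(dm)

rng = np.random.default_rng(0)
T, n = 3000, 11
X = np.column_stack([np.ones(T), rng.standard_normal((T, n - 1))])
u = rng.standard_normal(T)                          # residuals unrelated to x_t
print(dhm_statistic(u, X))
```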

In our implementation of the Den Haan-Marcet statistic, we regress the pricing residuals on a constant and five lags of both log consumption growth and log productivity growth (hence, n = 11). We fix β = 0.998, simulate 500 data sets of T = 3000 quarterly observations (750 years of data), and report in Table 5 the proportion of times that the value from Equation (36) falls below the 5% point or above the 95% point of the χ2(11) density. This Wald-type diagnostic corroborates the main result of the paper: in all cases the global Chebyshev projection method provides a very accurate solution to the model, whereas a high-order perturbation is only adequate for small values of the TFP volatility or where other model parameters (such as γ) eliminate the sensitivity of the value function to σz.

Table 5:

Den Haan-Marcet statistics, computed for 500 simulations of T = 3000 quarterly observations.

               σz = 0.01                                        σz = 0.04
               γ = 2           γ = 5           γ = 10           γ = 2           γ = 5           γ = 10
Projection     (0.052, 0.054)  (0.052, 0.052)  (0.050, 0.052)   (0.052, 0.056)  (0.054, 0.062)  (0.050, 0.07)
Perturbation   (0.058, 0.052)  (0.050, 0.058)  (0.006, 0.338)   (0, 0)          (0, 0)          (0, 0)
  1. The numbers in the parentheses represent the proportion of times the statistic was below and above, respectively, the 5% and 95% points of the χ2(11) density. In all cases, β = 0.998.

5 Conclusion

We have shown that the choice of solution method can be critical for production-based asset pricing models with recursive utility. In particular, local perturbation methods are inadequate for such models when asset prices and welfare costs are of interest, when TFP volatility is calibrated at high levels, and when the risk aversion and discount factor parameters are sufficiently high to make the value function very sensitive to TFP volatility. A global projection method, on the other hand, does quite well under a variety of circumstances. The reason for this result is that the value function in our model is highly curved in the direction of TFP volatility, σz. In fact, the degree of curvature is high enough that a local Taylor approximation of the value function is only suitable over a very small region around the point of approximation. Since perturbation is equivalent to a local Taylor expansion around the deterministic steady state (σz = 0), we find that in certain cases the resulting solution diverges for large σz, even for high-order approximations. A global approximation method, however, such as Chebyshev projection, is not susceptible to these issues, since it seeks to minimize an error function at any desired level of the TFP volatility.

We show that the parameter choice and model calibration are pivotal to the results above. For models that calibrate to quarterly, post-war data, typical values of σz and β are low enough to eliminate value function sensitivity to TFP volatility, rendering perturbation solutions perfectly acceptable. Caldara et al. (2012) compare perturbation and projection methods for such parameterizations and demonstrate that both are adequate. Our results diverge from those of Caldara et al. (2012) when we consider parameter values that are relevant for models which calibrate to annual, pre-war data. For these latter parameterizations, the quality of high-order perturbation methods degrades.

Local approximations of asset prices can be improved by augmenting the system of perturbation conditions and directly computing expansions for the risk-free rate. This method of approximation weakens the dependency of bond prices on the poorly approximated value function, and results in a more accurate solution. The same is less true of welfare costs: while very small improvements are also observed from direct computations, they are not nearly as striking.

Although we have not reported the stochastic steady state distribution of capital due to space considerations, as with many macroeconomic models, we find that the distribution lies quite far from the deterministic steady-state value, $\hat{K}_{ss}$. One possibility is to incorporate this information in the perturbation approximation by expanding the Taylor polynomial around the mean of the stochastic steady state distribution rather than $\hat{K}_{ss}$. However, we emphasize that while the value function is highly curved in the direction of σz, it is relatively linear in the direction of capital, and hence a first-order shift in the capital direction is likely to have little effect on a problem that originates in the volatility direction. Similar expansions around different values of σz are not possible, as analytic expressions for the derivatives of the perturbation system can only be found at the deterministic steady state (σz = 0).

In contrast to the value function, the consumption policy is relatively linear in the direction of σz and more curved in the direction of capital. As a result, a local Taylor approximation converges over a large range of values of σz, and perturbation has little difficulty in providing good approximations of the endogenous choice variable. For this reason, our results would have little bearing on the practitioner who is only interested in macroeconomic quantities: since quantity dynamics are not sensitive to the value function, perturbation delivers adequate solutions, even for fairly large values of σz. Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) only consider quantity dynamics and find that perturbation is competitive with global solution methods.

Our results are important for individuals who are jointly interested in quantity dynamics and other variables such as asset prices and welfare costs. Since asset prices in a recursive utility model depend crucially on the value function, the choice of solution method has an important impact on their moments (risk-free rates, risk premia, their volatilities, etc.) insofar as the method improves the value function approximation. The same is true of other variables that are tightly linked to the value function, such as welfare costs. While we do not emphasize our particular model as a solution to the joint problem of matching macroeconomic and asset pricing data, we believe that extensions of the model have great potential, and that the problems we have uncovered are likely to be present in other production-based models with recursive utility.

Our general caution is for practitioners to be aware of the potential disadvantages of a local approximation method and, when feasible, to compare it to a global method to ensure adequacy. While we find that Chebyshev projection is competitive with perturbation in terms of computing time for the case of a single state variable, such is not likely to be true of models with many more state variables; as the number of variables increases, a global method will suffer from the curse of dimensionality. In cases such as these (see, for example, Rudebusch and Swanson (2012)) perturbation has the benefit of computational simplicity and, hence, is a natural candidate for an estimation procedure. However, for models where perturbation cannot adequately approximate the value function, and where financial moments or welfare costs are of interest, no degree of computational simplicity can compensate for an incorrect solution. For this reason, we suggest using perturbation in cases where solution adequacy can be verified against a more robust benchmark.

A Projection algorithm pseudocode

Our projection algorithm proceeds in the following manner:

1: Set τ = 0.00000001, Δ = 1, l = 0 and a_0 = 1.
2: while Δ > τ do
3:     for i = 1 to N_k do
4:         Solve $\hat{V}_i = \max_{\hat{C}}\left\{(1-\beta)\hat{C}^{\frac{1-\gamma}{\theta}} + \beta\left(\mathcal{E}(\hat{K}_i,\hat{C})\right)^{\frac{1}{\theta}}\right\}^{\frac{\theta}{1-\gamma}}$
5:         for j = 1 to N_ε do
6:             $\hat{Z}_j = \exp(\mu + \sigma_z\varepsilon_j)$, $\hat{K}'_{i,j}(\hat{C}) = \frac{1}{\hat{Z}_j}\left((1-\delta)\hat{K}_i + \phi\!\left(\frac{\hat{K}_i^{\alpha}-\hat{C}}{\hat{K}_i}\right)\hat{K}_i\right)$,
               and
               $\Psi(\hat{K}_i,\hat{C},\varepsilon_j) = \Phi\!\left(\hat{K}'_{i,j}(\hat{C})\right)a_l$.
7:         end for
8:         $\mathcal{E}(\hat{K}_i,\hat{C}) = \sum_{j=1}^{N_\varepsilon}\omega_j\,\hat{Z}_j^{1-\gamma}\,\Psi(\hat{K}_i,\hat{C},\varepsilon_j)^{1-\gamma}$,
           where ω_j, j = 1, 2, …, N_ε, are the Gauss-Hermite quadrature weights. Denote the argmax by $\hat{C}_i$. Clearly, $\Psi(\hat{K}_i,\hat{C},\varepsilon_j)$ is an approximation of $\hat{V}(\hat{K}')$, given $\hat{K}_i$, $\hat{C}$ and ε_j, and $\mathcal{E}(\hat{K}_i,\hat{C})$ is an approximation of $E_t\!\left[\hat{Z}'^{1-\gamma}\hat{V}(\hat{K}')^{1-\gamma}\right]$, given $\hat{K}_i$ and $\hat{C}$.
9:     end for
10:    Update the coefficients by solving the linear system
           $a_{l+1} = \left(\Phi(\hat{K})^{T}\Phi(\hat{K})\right)^{-1}\Phi(\hat{K})^{T}\hat{V}$,
       where $\hat{V}$ is the vector comprised of $\hat{V}_i$, i = 1, 2, …, N_k.
11:    Set $\Delta = \max\left|\Phi(\hat{K})a_{l+1} - \Phi(\hat{K})a_l\right|$ and l = l + 1.
12: end while
13: Solve for the coefficients of the consumption policy approximant
        $b_{l+1} = \left(\Phi(\hat{K})^{T}\Phi(\hat{K})\right)^{-1}\Phi(\hat{K})^{T}\hat{C}$,
    where $\hat{C}$ is the vector comprised of $\hat{C}_i$, i = 1, 2, …, N_k.

The maximization step in line 4 of the algorithm can be performed in a variety of ways; we use a binary search method that exploits the monotonicity of the value function with respect to $\hat{C}$ (see Appendix D). We also speed the algorithm with a Howard improvement step, performing the maximization in line 4 only when l − 100⌊l/100⌋ = 0 (that is, when l modulo 100 is zero) and otherwise computing $\hat{V}_i$ by substituting the contemporaneous value of $\hat{C}_i$. The resulting polynomial approximations are

(37) $\tilde{V}_{proj}(\hat{K}) = \Phi(\hat{K})\,a_l$

and

(38) $\tilde{C}_{proj}(\hat{K}) = \Phi(\hat{K})\,b_l.$

Note that the polynomial approximation for consumption, computed in line 13, is a byproduct of the solution and is not used within the algorithm to obtain the solution. For our particular implementation of the projection algorithm, we use Chebyshev basis functions and their collocation points; this method allows us to choose the $N_k$ values of $\hat{K}$ so that the interpolation errors are uniformly minimized and so that $N_k = M$. The latter property results in a square polynomial matrix $\Phi(\hat{K})$, which allows the coefficients in lines 10 and 13 to be computed via a simple matrix inversion, $\Phi(\hat{K})^{-1}$, rather than $\left(\Phi(\hat{K})^{T}\Phi(\hat{K})\right)^{-1}\Phi(\hat{K})^{T}$. We find that a value as low as M = 6 (an order-5 polynomial) provides accurate solutions to the problem.
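For concreteness, the following Python sketch implements the collocation fixed point above under illustrative assumptions: the parameter values, the capital bounds [Klo, Khi], the number of quadrature nodes, and the maximum iteration count are placeholders rather than the paper's calibration, and a generic bounded scalar optimizer replaces the binary search of Appendix D and the Howard improvement step mentioned above.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb
from scipy.optimize import minimize_scalar

# Illustrative parameter values and capital bounds (not the paper's calibration)
beta, gamma, psi = 0.998, 10.0, 2.0
alpha, delta, mu, sigz, xi = 0.36, 0.02, 0.005, 0.01, 4.0
theta = (1.0 - gamma) / (1.0 - 1.0 / psi)

# Adjustment cost function with no costs in the deterministic steady state (Appendix B)
xss = np.exp(mu) - 1.0 + delta
a1, a2 = xss ** (1.0 / xi), xss / (1.0 - xi)
phi = lambda x: a1 / (1.0 - 1.0 / xi) * x ** (1.0 - 1.0 / xi) + a2

# Chebyshev collocation: M nodes, order M-1 polynomial, square basis matrix
M, Klo, Khi = 6, 0.5, 5.0
z = -np.cos((2.0 * np.arange(1, M + 1) - 1.0) * np.pi / (2.0 * M))
K = Klo + (z + 1.0) * (Khi - Klo) / 2.0
to_unit = lambda k: 2.0 * (k - Klo) / (Khi - Klo) - 1.0
Phi = cheb.chebvander(z, M - 1)

# Gauss-Hermite nodes/weights adapted to eps ~ N(0,1)
gh_x, gh_w = np.polynomial.hermite.hermgauss(10)
eps, w = np.sqrt(2.0) * gh_x, gh_w / np.sqrt(np.pi)
Z = np.exp(mu + sigz * eps)

def expectation(Ki, C, a):
    """E(K_i, C) = sum_j w_j Z_j^{1-gamma} Vhat(K'_{i,j})^{1-gamma} (line 8)."""
    Kp = ((1.0 - delta) * Ki + phi((Ki ** alpha - C) / Ki) * Ki) / Z
    Kp = np.clip(Kp, Klo, Khi)                             # stay on the approximation domain
    Vp = np.maximum(cheb.chebval(to_unit(Kp), a), 1e-12)   # guard early iterations
    return np.sum(w * Z ** (1.0 - gamma) * Vp ** (1.0 - gamma))

a = np.zeros(M); a[0] = 1.0                # initial value function approximately 1
for it in range(10000):                    # the paper's Howard step would accelerate this loop
    V, Copt = np.empty(M), np.empty(M)
    for i, Ki in enumerate(K):
        obj = lambda C: -((1.0 - beta) * C ** ((1.0 - gamma) / theta)
                          + beta * expectation(Ki, C, a) ** (1.0 / theta)) ** (theta / (1.0 - gamma))
        res = minimize_scalar(obj, bounds=(1e-6, Ki ** alpha - 1e-6), method="bounded")
        V[i], Copt[i] = -res.fun, res.x
    a_new = np.linalg.solve(Phi, V)        # collocation: square linear system (line 10)
    done = np.max(np.abs(Phi @ a_new - Phi @ a)) < 1e-8
    a = a_new
    if done:
        break
b = np.linalg.solve(Phi, Copt)             # consumption policy coefficients (line 13)
```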

B Adjustment cost parameters

We define the parameters α1 and α2 of the adjustment cost function. We want to specify a function that does not impose adjustment costs in the deterministic steady state; i.e. a function that satisfies

(39) $\phi(x_{ss}) = x_{ss}$

and

(40) $\phi'(x_{ss}) = 1,$

where $x_{ss} = \frac{\hat{K}_{ss}^{\alpha} - \hat{C}_{ss}}{\hat{K}_{ss}}$. First, in the deterministic steady state, Equation (19) becomes

(41) $\hat{K}_{ss} = \frac{1}{\hat{Z}_{ss}}\left((1-\delta)\hat{K}_{ss} + \phi\!\left(\frac{\hat{K}_{ss}^{\alpha}-\hat{C}_{ss}}{\hat{K}_{ss}}\right)\hat{K}_{ss}\right) \;\Longrightarrow\; x_{ss} = \phi^{-1}\!\left(\hat{Z}_{ss} - 1 + \delta\right).$

Since $\phi'(x) = \alpha_1 x^{-1/\xi}$, Equation (40) is satisfied if $\alpha_1 = x_{ss}^{1/\xi}$. Substituting this value for $\alpha_1$, Equation (39) is then satisfied if

$\alpha_2 = x_{ss} - \frac{\alpha_1}{1-1/\xi}\,x_{ss}^{1-1/\xi} = x_{ss} - \frac{1}{1-1/\xi}\,x_{ss} = \frac{-1/\xi}{1-1/\xi}\,x_{ss} = \frac{1}{1-\xi}\,x_{ss}.$

Combining Equations (39) and (41), it is clear that

$x_{ss} = \hat{Z}_{ss} - 1 + \delta = \exp(\mu) - 1 + \delta,$

from which we conclude

(42) $\alpha_1 = \left(\exp(\mu) - 1 + \delta\right)^{1/\xi}$

and

(43) $\alpha_2 = \frac{1}{1-\xi}\left(\exp(\mu) - 1 + \delta\right).$
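As a quick numerical check, the following lines compute α1 and α2 from Equations (42) and (43) and verify conditions (39) and (40). The parameter values are illustrative, and the code assumes the Jermann-style level specification $\phi(x) = \frac{\alpha_1}{1-1/\xi}x^{1-1/\xi} + \alpha_2$, which is consistent with $\phi'(x) = \alpha_1 x^{-1/\xi}$ and with the derivation above.

```python
import numpy as np

# Illustrative values; phi(x) = a1/(1 - 1/xi) * x^(1 - 1/xi) + a2 is an assumption
# consistent with phi'(x) = a1 * x^(-1/xi) used in the derivation above.
mu, delta, xi = 0.005, 0.02, 4.0
xss = np.exp(mu) - 1.0 + delta

a1 = xss ** (1.0 / xi)                    # Equation (42)
a2 = xss / (1.0 - xi)                     # Equation (43)

phi  = lambda x: a1 / (1.0 - 1.0 / xi) * x ** (1.0 - 1.0 / xi) + a2
dphi = lambda x: a1 * x ** (-1.0 / xi)

assert np.isclose(phi(xss), xss)          # Equation (39): no adjustment costs in steady state
assert np.isclose(dphi(xss), 1.0)         # Equation (40): unit marginal adjustment cost
```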

C Derivation of return on equity

The Lagrangian for the firm’s problem is:

(44) $\max_{\{I_t, K_{t+1}, H_t\}} E_0\!\left[\sum_{t=0}^{\infty} M_{t+1}\left\{(Z_tH_t)^{1-\alpha}K_t^{\alpha} - W_tH_t - I_t + \mu_t\left(\phi\!\left(\frac{I_t}{K_t}\right)K_t + (1-\delta)K_t - K_{t+1}\right)\right\}\right].$

The first order condition with respect to It is

$-1 + \mu_t\,\phi'\!\left(\frac{I_t}{K_t}\right) = 0,$

which implies,

(45) $\mu_t = \frac{1}{\phi'\!\left(\frac{I_t}{K_t}\right)}.$

The first order condition with respect to Kt+1 is

(46) $-\mu_t + E_t\!\left[M_{t+1}\,\alpha(Z_{t+1}H_{t+1})^{1-\alpha}K_{t+1}^{\alpha-1}\right] + E_t\!\left[M_{t+1}\mu_{t+1}\left((1-\delta) - \phi'\!\left(\frac{I_{t+1}}{K_{t+1}}\right)\frac{I_{t+1}}{K_{t+1}} + \phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right)\right)\right] = 0.$

Using Equation (45), we substitute for $\phi'(I_{t+1}/K_{t+1})$ in Equation (46) and rearrange to get

(47) $\mu_t = E_t\!\left[M_{t+1}\left\{\frac{\alpha(Z_{t+1}H_{t+1})^{1-\alpha}K_{t+1}^{\alpha} - I_{t+1}}{K_{t+1}} + \mu_{t+1}\left(\phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right) + 1 - \delta\right)\right\}\right].$

Substituting for $\mu_t$ and $\mu_{t+1}$, and recognizing $Y_t = (Z_tH_t)^{1-\alpha}K_t^{\alpha}$ and $I_t = Y_t - C_t$, we obtain

(48) $1 = E_t\!\left[M_{t+1}\,\phi'\!\left(\frac{I_t}{K_t}\right)\left(\frac{(\alpha-1)Y_{t+1} + C_{t+1}}{K_{t+1}} + \frac{\phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right) + 1 - \delta}{\phi'\!\left(\frac{I_{t+1}}{K_{t+1}}\right)}\right)\right]$
(49) $\phantom{1} = E_t\!\left[M_{t+1}R_{t+1}^{I}\right],$

where

(50) $R_{t+1}^{I} = \phi'\!\left(\frac{I_t}{K_t}\right)\left(\frac{(\alpha-1)Y_{t+1}+C_{t+1}}{K_{t+1}} + \frac{\phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right)+1-\delta}{\phi'\!\left(\frac{I_{t+1}}{K_{t+1}}\right)}\right).$

Equation (49) is the standard Euler condition for the return on investment, $R_{t+1}^{I}$. Moreover, since the production technology and adjustment costs satisfy constant returns to scale, Restoy and Rockinger (1994) prove $R_{t+1}^{E} = R_{t+1}^{I}$, where $R_{t+1}^{E}$ is the unlevered return on equity.
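A small helper that evaluates Equation (50) on simulated series might look as follows; the argument names, the time-index suffixes (0 for t, 1 for t + 1), and the functions phi and dphi standing for ϕ and ϕ′ are our own conventions rather than notation from the paper.

```python
def r_investment(Y1, C1, K1, I0, K0, I1, phi, dphi, alpha, delta):
    """Return on investment R^I_{t+1} from Equation (50); suffix 0 denotes time t,
    suffix 1 denotes time t+1. With constant returns to scale this also equals the
    unlevered return on equity (Restoy and Rockinger 1994)."""
    payout = ((alpha - 1.0) * Y1 + C1) / K1
    continuation = (phi(I1 / K1) + 1.0 - delta) / dphi(I1 / K1)
    return dphi(I0 / K0) * (payout + continuation)
```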

D Maximization algorithm

We present the binary search algorithm that we use to select the optimal consumption value in line 4 of the projection algorithm (Appendix A). This method exploits the monotonicity of the value function with respect to $\hat{C}$ and converges very quickly. It is performed for each value in the capital grid, $\hat{K}$, at each step of the value function iteration where maximization is performed (non-Howard steps).

1: Set τ_c = 0.000001, $\varepsilon_c = \tau_c/10$, Δ_c = 1, l_c = 0, c_min = 0 and $c_{max} = \hat{K}_i^{\alpha}$.
2: while Δ_c > τ_c do
3:     $c_1 = \frac{c_{max} + c_{min}}{2}$ and $c_2 = c_1 + \varepsilon_c$
4:     for m = 1 to 2 do
5:         $\hat{C} = c_m$
6:         execute lines 5–8 of the projection algorithm to obtain $\mathcal{E}(\hat{K}_i, \hat{C})$
7:         $v_m = \left\{(1-\beta)\hat{C}^{\frac{1-\gamma}{\theta}} + \beta\left(\mathcal{E}(\hat{K}_i,\hat{C})\right)^{\frac{1}{\theta}}\right\}^{\frac{\theta}{1-\gamma}}$
8:     end for
9:     if v_1 > v_2 then
10:        c_max = c_1
11:    else
12:        c_min = c_2
13:    end if
14:    Δ_c = c_max − c_min
15: end while
16: $\hat{C}_i = c_1$.
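In Python, the routine can be written compactly as below; the name value_of_c is a hypothetical stand-in for the objective evaluated in line 7 (which itself requires lines 5–8 of the projection algorithm), and the tolerances mirror line 1.

```python
def maximize_consumption(value_of_c, c_max, tol=1e-6):
    """Bisection on consumption (Appendix D): value_of_c(c) is the recursive-utility
    objective in line 7, assumed single-peaked in c on [0, c_max]."""
    eps = tol / 10.0
    c_min, c1 = 0.0, 0.5 * c_max
    while c_max - c_min > tol:
        c1 = 0.5 * (c_max + c_min)
        c2 = c1 + eps
        if value_of_c(c1) > value_of_c(c2):   # objective falling at c1: optimum lies to the left
            c_max = c1
        else:                                  # objective rising at c1: optimum lies to the right
            c_min = c2
    return c1
```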

D.1 Radius of convergence: motivating example

We consider the very simple example of approximating the square root function, $f(x) = \sqrt{x}$, with a Taylor polynomial of various orders[4]. Judd (1998) provides similar numerical results for $x^{1/4}$. Generally speaking, any analytic function can be written as

(51) $f(x) = \sum_{i=0}^{\infty}\frac{f^{(i)}(x_0)}{i!}(x - x_0)^{i},$

for x in a neighborhood of $x_0$, where $f^{(i)}$ denotes the ith derivative of f and where $f^{(0)} = f$. Applying this result to the square root function,

(52) $\sqrt{x} = \sqrt{x_0} + \sum_{i=1}^{\infty}(-1)^{i+1}\frac{(2i-3)!!}{2^{i}\,i!}\,x_0^{\frac{1}{2}-i}(x-x_0)^{i},$

where $n!!$ denotes the double factorial of n[5]. For a general power series of the form $\sum_{i=1}^{\infty} a_i(x - x_0)^{i}$, the radius of convergence is defined as the value $r \in \bar{\mathbb{R}}_+$ such that the series converges for $|x - x_0| < r$; that is, the radius of convergence identifies the neighborhood over which the series converges. A simple way to determine the radius of convergence is

(53) $r = \lim_{i\to\infty}\left|\frac{a_i}{a_{i+1}}\right|.$

Hence, for the square root function (assuming x0 > 0),

(54) $r_{sq}(x_0) = \lim_{i\to\infty}\left|\frac{(-1)^{i+1}\frac{(2i-3)!!}{2^{i}i!}\,x_0^{\frac{1}{2}-i}}{(-1)^{i+2}\frac{(2i-1)!!}{2^{i+1}(i+1)!}\,x_0^{-\frac{1}{2}-i}}\right| = \lim_{i\to\infty}\frac{2(i+1)}{2i-1}\,x_0 = x_0.$

Equation (54) states that the Taylor series expansion of the square root function around x0 is only guaranteed to converge for x ∈ (0, 2x0). Outside of this range, the series expansion will diverge.

Intuitively, the radius of convergence for a Taylor series depends on the rate at which the derivatives of the target function diminish at the point of approximation. For a function with little shape, the high-order derivatives drop quickly to zero, forcing r to be quite high – the extreme case being a polynomial of finite order, with an infinite radius of convergence (the Taylor series converges for all xR). Conversely, in cases where the high-order derivatives do not exhibit quick decay, the radius of convergence is low, resulting in a Taylor series expansion which is only applicable over a small portion of the domain[6].

To illustrate this concept, we approximate the square root function at two values: x0 = 1 and x0 = 2. In the first case the radius of convergence is 1 and we anticipate that a Taylor series approximation will only be appropriate in the range (0, 2). The blue lines in Figure 14 depict the first nine Taylor polynomial approximations of f(x)=x around x0 = 1. Clearly, the polynomial approximations are adequate for x(0,2), but diverge outside of that range; while increasing the order of approximation to arbitrary levels allows us to fit the function at any desired level of precision over the interval (0, 2), the approximations become erratic outside of that interval for high orders. In the second case, the radius of convergence is 2, indicating that the Taylor series will converge on the interval (0, 4). The red lines in Figure 14 depict the first nine Taylor polynomial approximations around x0 = 2, confirming our prior intuition.

Figure 14: First nine Taylor polynomial approximations of $\sqrt{x}$ around x0 = 1 (blue) and x0 = 2 (red).

This example illustrates that there is an inverse relationship between the degree of curvature of a function at a point of interest and the size of the interval (around that point) over which a Taylor polynomial approximation is adequate.
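The divergence pattern in Figure 14 can be reproduced with a few lines of Python; the evaluation grid and polynomial orders below are arbitrary choices for illustration.

```python
import numpy as np

def sqrt_taylor(x, x0, order):
    """Order-`order` Taylor polynomial of sqrt(x) around x0 (cf. Equation (52))."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.sqrt(x0))
    coef = 1.0
    for i in range(1, order + 1):
        coef *= (0.5 - (i - 1)) / i            # generalized binomial coefficient C(1/2, i)
        out += coef * x0 ** (0.5 - i) * (x - x0) ** i
    return out

grid = np.array([0.5, 1.5, 2.5, 3.5])
for order in (3, 6, 9):
    err = np.abs(sqrt_taylor(grid, x0=1.0, order=order) - np.sqrt(grid))
    print(order, err)   # errors shrink at 0.5 and 1.5 (inside (0, 2)) but grow at 2.5 and 3.5
```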

References

Aruoba, S. B., J. Fernández-Villaverde, and J. F. Rubio-Ramírez. 2006. "Comparing Solution Methods for Dynamic Equilibrium Economies." Journal of Economic Dynamics and Control 30: 2477–2508.

Bansal, R., and A. Yaron. 2004. "Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles." The Journal of Finance 59: 1481–1509.

Bansal, R., D. Kiku, and A. Yaron. 2007. "Risks For the Long Run: Estimation and Inference." Working Paper.

Brumm, J., and S. Scheidegger. 2015. "Using Adaptive Sparse Grids to Solve High-Dimensional Dynamic Models." Working Paper, 1–39.

Cai, Y., K. L. Judd, and J. Steinbuks. 2015. "A Nonlinear Certainty Equivalent Approximation Method for Dynamic Stochastic Problems." Working Paper, 1–61.

Caldara, D., J. Fernández-Villaverde, J. F. Rubio-Ramírez, and W. Yao. 2012. "Computing DSGE Models with Recursive Preferences and Stochastic Volatility." Review of Economic Dynamics 15: 188–206.

Campanale, C., R. Castro, and G. L. Clementi. 2010. "Asset Pricing in a Production Economy with Chew-Dekel Preferences." Review of Economic Dynamics 13: 379–402.

Croce, M. M. 2013. "Welfare Costs in the Long Run." Working Paper.

Den Haan, W. J., and A. Marcet. 1994. "Accuracy in Simulations." The Review of Economic Studies 61: 3–17.

Den Haan, W. J., and J. de Wind. 2009. "How Well-Behaved are Higher-Order Perturbation Solutions?" Working Paper.

Epstein, L. G., and S. E. Zin. 1989. "Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework." Econometrica 57: 937–969.

Fernández-Villaverde, J., and O. Levintal. 2016. "Solution Methods for Models with Rare Disasters." Working Paper, 1–37, http://www.nber.org/papers/w21997.

Fernández-Villaverde, J., G. Gordon, P. Guerrón-Quintana, and J. F. Rubio-Ramírez. 2015. "Nonlinear Adventures at the Zero Lower Bound." Journal of Economic Dynamics and Control 57: 182–204.

Irarrazabal, A., and J. C. Parra-Alvarez. 2015. "Time-Varying Disaster Risk Models: An Empirical Assessment of the Rietz-Barro Hypothesis." Working Paper, 1–46.

Jermann, U. J. 1998. "Asset Pricing in Production Economies." Journal of Monetary Economics 41: 257–275.

Judd, K. L. 1998. Numerical Methods in Economics. Cambridge, MA: MIT Press.

Judd, K. L., and S.-M. Guu. 1997. "Asymptotic Methods for Aggregate Growth Models." Journal of Economic Dynamics and Control 21: 1025–1042.

Judd, K. L., L. Maliar, and S. Maliar. 2011. "Numerically Stable and Accurate Stochastic Simulation Approaches for Solving Dynamic Economic Models." Quantitative Economics 2: 173–210.

Judd, K. L., L. Maliar, S. Maliar, and R. Valero. 2014. "Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain." Journal of Economic Dynamics and Control 44: 92–123.

Kaltenbrunner, G., and L. Lochstoer. 2008. "Long-Run Risk through Consumption Smoothing." Working Paper.

Levintal, O. 2016. "Taylor Projection: A New Solution Method to Dynamic General Equilibrium Models." Working Paper, 1–62.

Maliar, L., and S. Maliar. 2015. "Merging Simulation and Projection Approaches to Solve High-Dimensional Problems with an Application to a New Keynesian Model." Quantitative Economics 6: 1–47.

Mehra, R., and E. C. Prescott. 1985. "The Equity Premium: A Puzzle." Journal of Monetary Economics 15: 145–161.

Parra-Alvarez, J. C. 2017. "A Comparison of Numerical Methods for the Solution of Continuous-Time DSGE Models." Macroeconomic Dynamics 22: 1555–1583.

Pohl, W., K. Schmedders, and O. Wilms. 2015. "Higher-Order Effects in Asset-Pricing Models with Long-Run Risks." Working Paper, 1–57.

Restoy, F., and G. M. Rockinger. 1994. "On Stock Market Returns and Returns on Investment." The Journal of Finance 49: 543–556.

Rudebusch, G. D., and E. T. Swanson. 2012. "The Bond Premium in a DSGE Model with Long-Run Real and Nominal Risks." American Economic Journal: Macroeconomics 4: 105–143.

Schmitt-Grohé, S., and M. Uribe. 2004. "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function." Journal of Economic Dynamics and Control 28: 755–775.

Weil, P. 1989. "The Equity Premium Puzzle and the Risk-Free Rate Puzzle." Journal of Monetary Economics 24: 401–421.

Weil, P. 1990. "Nonexpected Utility in Macroeconomics." The Quarterly Journal of Economics 105: 29–42.

Published Online: 2019-12-14

© 2019 Walter de Gruyter GmbH, Berlin/Boston
