Disentangling Permanent and Transitory Monetary Shocks with a Nonlinear Taylor Rule


 This article provides an estimation method to decompose monetary policy innovations into persistent and transitory components using the nonlinear Taylor rule proposed in Andolfatto, Hendry, and Moran (2008) [Are inflation expectations rational? Journal of Monetary Economics, 55, 406–422]. To make the Kalman filter the optimal signal-extraction technique, we propose a convenient reformulation of the state equation in which expectations play a significant role in explaining the future time evolution of monetary shocks. This alternative formulation allows us to perform maximum likelihood estimation of all the parameters involved in the monetary policy rule, as well as to recover conditional probabilities of regime change. Empirical evidence on US monetary policy making is provided for the period covering 1986-Q1 to 2021-Q2. We compare our empirical estimates with those obtained with the particle filter. While both procedures lead to similar quantitative and qualitative findings, our approach has a much lower computational cost.


Introduction
State-space models are useful in many economic applications. As is well known, under normality the classical Kalman filter provides the minimum-variance estimate of the current state given the most recent signal; this prediction is simply the conditional expectation. However, under non-linearity and/or non-normality, the filtering procedure developed by Kalman (1960) is no longer optimal. Two alternatives have been developed in the literature to deal with this issue: a) the use of a first-order Taylor series expansion to linearize the transition and/or observation equations, and b) the use of simulation techniques based on the sequential estimation of conditional densities through a large number of replications. The first alternative leads to biased estimators. As to the second approach, the seminal papers of Fernández-Villaverde and Rubio-Ramírez (2005, 2007) show how to deal with likelihood-based estimation of non-linear DSGE models with non-normal shocks using a sequential Monte Carlo method (the particle filter). This procedure requires a heavy computational burden.
This paper rethinks the non-optimality of the Kalman filter by revisiting the signal extraction problem proposed in Andolfatto et al. (2008). These authors consider a non-linear Taylor rule where regime shifts reflect the updating of the central bank's inflation target. Such a rule can be useful not only to analyze monetary policy making through the lens of a Taylor rule, but also in the context of New Keynesian models that incorporate imperfect monetary policy credibility and/or changes in the central bank's inflation target (see, for example, Kozicki and Tinsley, 2005, Ireland, 2007, Cogley et al., 2010, Aruoba and Schorfheide, 2011, and Milani and Treadwell, 2012). The paper contributes to the literature by providing an optimal use of the Kalman filter to estimate persistent and transitory monetary shocks when permanent shifts in the inflation target take place. We therefore focus on how to estimate a Taylor rule in which the central bank's smoothing of interest rates is time-varying as a consequence of time-varying inflation targeting. We consider a new state-space representation that requires the use of state-contingent matrices and in which expectations play a significant role in monetary policy making. Our procedure has two clear advantages over the standard particle filter: a) it allows maximum likelihood estimation of the parameters involved in the monetary policy rule, and therefore the estimation of conditional time-varying probabilities of regime switching; and b) it has a remarkably lower computational cost. Moreover, it can be incorporated into simulation algorithms for DSGE models in a straightforward manner.
In order to provide an empirical comparison between our estimation procedure and the particle filter, we estimate permanent and transitory monetary shocks from quarterly US data covering the period 1980-2011. We find that the evidence of a regime change in US monetary policy making during the period 1984 to 1999 is weak.
However, after the Great Moderation, September 11, the recession that started in March 2001 and the subprime crisis are three events that clearly affected inflation targets in terms of the long-term nominal anchor. The point estimates for all the parameters involved are very similar across the two procedures. Moreover, Monte Carlo simulations reveal that a) the probability distribution of the discrepancy between the current inflation target and its long-term mean is statistically similar in most cases (83%), and b) the mean squared error in predicting deviations of inflation from the long-run target is lower when our estimation procedure is used.
The rest of the paper is organized as follows. The next section reviews the nonlinear Taylor rule on which we focus. Section III describes the proposed reformulation of the state-space representation. Section IV presents empirical evidence for the US and compares our empirical findings with those based on the particle filter. Finally, Section V summarizes and provides concluding remarks.

The Econometric Problem
Consider the following Taylor rule with time-varying inflation targeting (Andolfatto et al., 2008):

$i_t = \rho\, i_{t-1} + (1-\rho)\left[r^{*} + \pi^{*}_{t} + \alpha(\pi_t - \pi^{*}_{t}) + \beta(y_t - \bar{y}_t)\right] + u_t$,   (1)

where r* is the long-run equilibrium real interest rate, π*_t denotes the inflation target, y_t − ȳ_t is the output gap, ρ is the parameter accounting for monetary policy inertia, and u_t represents the monetary shock, which can be interpreted as errors underlying the central bank's control over the policy instrument. We suppose that this shock evolves as an AR(1) process with Gaussian innovations,

$u_t = \phi\, u_{t-1} + g_t, \qquad g_t \sim N(0, \sigma^2_g)$,   (2)

while the inflation target is subject to occasional regime changes: it remains at its current value with probability p and is redrawn around its long-run level π* with probability 1 − p (equation (3)). Combining the definition of z_t, the deviation of the current inflation target from its long-run level (scaled by the reaction-function coefficients), with (1), the Taylor rule can be rearranged as a reaction function with a constant long-run target,

$i_t = \rho\, i_{t-1} + (1-\rho)\left[r^{*} + \pi^{*} + \alpha(\pi_t - \pi^{*}) + \beta(y_t - \bar{y}_t)\right] + \varepsilon_t, \qquad \varepsilon_t = z_t + u_t$,   (4)

whose error term combines the persistent and the transitory components of the monetary shock.

State-space representation and maximum likelihood estimation

Andolfatto et al. (2008) propose the following state-space representation for the monetary shocks in the above-mentioned Taylor rule:

$z_{t+1} = S_{t+1}\, z_t + (1 - S_{t+1})\, e_{t+1}, \qquad e_{t+1} \sim N(0, \sigma^2_e), \qquad \Pr(S_{t+1} = 1) = p$,   (5)

$\hat{\varepsilon}_t = z_t + u_t$,

where the observable signal, ε̂_t, is the OLS estimate of the error term in the Monetary Authority's reaction function (equation (4)).
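To fix ideas, the following sketch simulates the two components of the composite shock in (4)-(5): a persistent component z_t that stays constant until the target is redrawn, and a transitory AR(1) component u_t. It is a minimal illustration of the process described above; the parameter values and the function name are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def simulate_monetary_shock(T=200, p=0.87, phi=0.4,
                            sigma_e=0.5, sigma_g=0.2, seed=0):
    """Illustrative simulation of the composite monetary shock.

    z[t]   : persistent component, constant while S_t = 1 (probability p),
             redrawn from N(0, sigma_e^2) when a regime change occurs.
    u[t]   : transitory AR(1) component with coefficient phi and
             innovations N(0, sigma_g^2).
    eps[t] : observable composite shock, the sum of both components.
    """
    rng = np.random.default_rng(seed)
    z = np.zeros(T)
    u = np.zeros(T)
    for t in range(1, T):
        no_change = rng.random() < p        # S_t = 1 with probability p
        z[t] = z[t - 1] if no_change else rng.normal(0.0, sigma_e)
        u[t] = phi * u[t - 1] + rng.normal(0.0, sigma_g)
    return z, u, z + u

z, u, eps = simulate_monetary_shock()
print(eps[:5])
```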
As pointed out by Andolfatto et al. (2008), the use of the Kalman filter is not fully optimal because z_t is a mixture of a Bernoulli process and a Gaussian noise. To overcome this non-normality, let us consider an alternative formulation of the time evolution of z_t that requires a state-space representation with state-contingent matrices in the state equation.2 This alternative formulation (LPR representation, hereafter) replaces the Bernoulli mixture in (5) with a state equation driven by Gaussian innovations, in which expectations of the regime indicator play a significant role and which depends on an additional parameter ϕ (equations (6)-(8)).

2 It is well known that the state-space representation of a dynamic system is not unique. In the problem at hand, a simpler and more intuitive representation would use the state-contingent transition matrix A_{S,t} directly. However, under this representation numerical problems are very likely to arise, because the matrix A_{S,t} either has three zero elements when S_t = 0 or a unit eigenvalue when S_t = 1, which raises nonstationarity concerns. Our state-space representation overcomes these problems.
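The numerical concern raised in footnote 2 can be made concrete. With state vector (z_t, u_t)', the simpler representation uses a transition matrix that depends on the regime indicator; the sketch below writes out one matrix consistent with the footnote's description (three zero entries when S_t = 0, a unit eigenvalue when S_t = 1). The exact matrices used in the paper are not reproduced here, so this form is an assumption for illustration only.

```python
import numpy as np

def transition_matrix(S_t, phi=0.4):
    """State-contingent transition matrix for the state vector (z_t, u_t)'.

    S_t = 1: the inflation target carries over, so the z-block has a unit
             eigenvalue (a borderline nonstationary state equation).
    S_t = 0: the target is redrawn, so the z-block vanishes and the matrix
             has three zero entries.
    """
    return np.array([[float(S_t), 0.0],
                     [0.0,        phi]])

print(np.linalg.eigvals(transition_matrix(1)))  # unit eigenvalue when S_t = 1
print(transition_matrix(0))                     # three zero entries when S_t = 0
```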
Proposition 1: Under the LPR formulation in (6)-(8), the dynamics of z_t is observationally equivalent to (5) from the perspective of the conditional mean.

Proof. From (6), the conditional expectation of z_{t+1} implied by the LPR representation can be computed directly. From (8), S_{t+1} = 1 with probability p, and matching the corresponding branch of (5) yields condition (10); again from (8), S_{t+1} = 0 with probability 1 − p, which yields condition (11). Equations (10) and (11) together imply that the conditional expectation of z_{t+1} coincides with that implied by (5). ∎

Note that the representation we propose is a function of the parameter ϕ.
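For reference, the conditional moments that the LPR representation must replicate can be worked out directly from the mixture dynamics in (5) as written above; the short derivation below assumes that form of (5) and is meant only to make the targets of Propositions 1 and 2 explicit.

```latex
% Conditional moments of z_{t+1} under (5):
%   z_{t+1} = z_t                          with probability p     (S_{t+1} = 1),
%   z_{t+1} = e_{t+1} ~ N(0, sigma_e^2)    with probability 1 - p (S_{t+1} = 0).
\begin{align}
E_t\!\left[z_{t+1}\right] &= p\, z_t, \\
E_t\!\left[z_{t+1}^{2}\right] &= p\, z_t^{2} + (1-p)\,\sigma_e^{2}, \\
\operatorname{Var}_t\!\left(z_{t+1}\right)
  &= E_t\!\left[z_{t+1}^{2}\right] - \left(E_t\!\left[z_{t+1}\right]\right)^{2}
   = (1-p)\,\sigma_e^{2} + p(1-p)\, z_t^{2}.
\end{align}
```

Any Gaussian reformulation must therefore reproduce a conditional mean of p z_t and the state-dependent conditional variance in the last line.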
Next, we demonstrate that there is a unique value of ϕ, expressed in terms of the probability p, that yields the same conditional variance as in (5) for the z_t process.

Proposition 2:
The LPR representation yields the same conditional variance as in (5) for the z_t process if ϕ = p/2.
Proof: In accordance with equation (5), the conditional variance of z_t follows directly from the Bernoulli mixture and is given by expression (12). Using our representation, the corresponding conditional variance is given by (13). Substituting ϕ = p/2 into equation (13), it is straightforward to obtain expression (12). ∎

Our state-space formulation, which is characterized by having Gaussian innovations, consists of the state equation (14) and the observation equation (15). Equations (14) and (15) define a state-space system (see Hamilton, 1994, Chapter 13). Next, we describe how to obtain the log-likelihood function to be maximized with respect to the parameters φ, σ²_g, σ²_e and p.

Step 1: Computing the density functions for each history. The conditional density function of ε̂_t given the information available up to t − 1 is Gaussian for each possible history of the regime indicator. We derive in Appendix 1 the equations for the Kalman filter using our state-space representation.
Step 2: Computing the marginal density function of ε̂_t by integrating over the possible histories of the regime indicator.

Step 3: Obtaining the log-likelihood function of ε̂ as the sum over time of the logarithms of these marginal densities.

Once the parameters have been estimated, the probability of a regime change in the current period conditional on a given shock can be estimated by Bayes' rule, as the ratio of the conditional density associated with a regime change to the marginal density of the observed shock, both evaluated at θ, the vector of estimated parameters.
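Since the LPR matrices in (14) and (15) are not reproduced here, the sketch below only illustrates the generic object maximized in Steps 1-3: the prediction-error decomposition of the Gaussian log-likelihood delivered by the Kalman filter, specialized to a one-dimensional state for brevity. Function names and parameter values are illustrative assumptions; plugging in the LPR state matrices would specialize the recursion to the model above.

```python
import numpy as np
from scipy.optimize import minimize

def kalman_loglik(y, A, Q, H, R, x0, P0):
    """Gaussian log-likelihood of a scalar observation series y under a
    linear state-space model, via the prediction-error decomposition."""
    x, P, loglik = x0, P0, 0.0
    for y_t in y:
        # Prediction step
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # One-step-ahead prediction error and its variance
        v = y_t - (H @ x_pred).item()
        F = (H @ P_pred @ H.T + R).item()
        loglik += -0.5 * (np.log(2.0 * np.pi) + np.log(F) + v**2 / F)
        # Updating step
        K = P_pred @ H.T / F
        x = x_pred + K * v
        P = P_pred - K @ H @ P_pred
    return loglik

# Illustrative use: ML estimation of an AR(1) signal observed with noise.
rng = np.random.default_rng(1)
state = np.zeros(200)
for t in range(1, 200):
    state[t] = 0.5 * state[t - 1] + rng.normal(0.0, 0.3)
y = state + rng.normal(0.0, 0.2, size=200)

def negloglik(theta):
    phi, log_q, log_r = theta
    return -kalman_loglik(y,
                          A=np.array([[phi]]), Q=np.array([[np.exp(log_q)]]),
                          H=np.array([[1.0]]), R=np.array([[np.exp(log_r)]]),
                          x0=np.zeros((1, 1)), P0=np.eye(1))

fit = minimize(negloglik, x0=[0.3, -2.0, -2.0], method="Nelder-Mead")
print(fit.x)  # AR coefficient and the two log-variances
```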

Empirical Evidence
In this section we show how to use our estimation method to provide empirical evidence on monetary policy making for the US through the lens of a Taylor rule. We use quarterly data from the EcoWin Economic & Financial database; in particular, we collect information on interest rates, inflation and GDP for the sample period 1980-2011 (first quarter). We estimate the output gap by subtracting a nonlinear trend from real GDP using the Hodrick-Prescott filter. The series are displayed in Figure 1. A least-squares regression of the Taylor rule in (4) provides the observable signal ε̂_t, and the parameters of the signal-extraction problem are then estimated by maximum likelihood. These parameters are the estimated probability of regime change (p), the estimated volatilities of permanent and transitory shocks (σ²_e and σ²_g, respectively) and the AR(1) parameter that governs the time evolution of the transitory shock (φ). The probability of regime change for the US is around 13% (that is, 1 − p), which implies a mean duration of regimes of around seven quarters. Also, as expected, the volatilities of the two types of shocks differ significantly: the volatility of the transitory shocks is clearly lower than that of the permanent shocks. Moreover, the estimated coefficient φ is positive, statistically significant at the 1% level and clearly lower than one, a finding that is consistent with the assumptions made. The estimated regime changes in the latter part of the sample coincide with major policy announcements; for instance, in March 2009 the FOMC stated: "Moreover, to help improve conditions in private credit markets, the Committee decided to purchase up to $300 billion of longer-term Treasury securities over the next six months".
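A minimal sketch of the first-stage estimation described above is given below, assuming quarterly series for the policy rate, inflation and log real GDP are available as pandas objects; the data frame, column names and rule specification are placeholder assumptions, not the EcoWin extracts or the exact regression run in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def first_stage_residuals(rate, inflation, log_gdp, lamb=1600):
    """OLS estimate of an inertial Taylor rule; the residuals play the role
    of the observable signal fed to the signal-extraction step."""
    # Output gap: deviation of log real GDP from its Hodrick-Prescott trend
    cycle, _trend = sm.tsa.filters.hpfilter(log_gdp, lamb=lamb)
    X = sm.add_constant(pd.DataFrame({
        "lag_rate": rate.shift(1),   # interest-rate smoothing term
        "inflation": inflation,
        "output_gap": cycle,
    }))
    fit = sm.OLS(rate, X, missing="drop").fit()
    return fit.resid, fit.params

# Usage with user-supplied quarterly data, e.g.:
# resid, params = first_stage_residuals(df["ffr"], df["pi"], np.log(df["gdp"]))
```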
As to the time-varying estimates of the difference between the current and the long-term targeted rates, Figure 2 suggests that the short-run inflation target has been close to a constant since 1984, and far more volatile before that date, revealing that inflation did not appear as a serious concern in the short run for the Federal Reserve.

An alternative approach: the particle filter
As a robustness check, we now explore the differences between the above empirical findings and those obtained when the particle filter is used to estimate directly the state-space form given by (4) and (5). For each estimation procedure, the confidence interval at conventional significance levels contains the point estimate obtained with the alternative approach.
In a Monte Carlo experiment we compare z_t, the theoretical value of the inflation-target deviation, with z_{t|t−1}, its estimated value obtained using either the Kalman filter with the LPR representation or the particle filter. The null of equal distributions is rejected in about 17% of the replications, a figure that is neither very high nor negligible, as expected from visual inspection of Figure 3.
However, as to the mean squared error in fitting the theoretical difference between the current inflation target and the long-term target, we obtain median values of 0.0132 for the Kalman filter with the LPR representation and 0.0260 for the particle filter. Therefore, our simulation experiment shows that our estimation procedure has better predictive ability for the discrepancy between the short- and long-run inflation targets.
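For completeness, a generic bootstrap (sequential importance resampling) particle filter for the mixture state equation could look like the sketch below. It treats the transitory component as white observation noise for simplicity and is an illustrative assumption, not the implementation actually used for the comparison.

```python
import numpy as np

def bootstrap_particle_filter(eps, p, sigma_e, sigma_g,
                              n_particles=5000, seed=0):
    """Filtered estimates of the persistent component z_t from the observed
    composite shocks eps_t, using sequential importance resampling.

    State:       z_t = z_{t-1} with probability p, redrawn from
                 N(0, sigma_e^2) otherwise.
    Observation: eps_t = z_t + noise, with noise ~ N(0, sigma_g^2).
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, sigma_e, n_particles)
    z_filtered = np.empty(len(eps))
    for t, y in enumerate(eps):
        # Propagate: each particle keeps its target or draws a new one
        redraw = rng.random(n_particles) >= p
        particles = np.where(redraw,
                             rng.normal(0.0, sigma_e, n_particles), particles)
        # Weight by the Gaussian observation density and normalize
        w = np.exp(-0.5 * ((y - particles) / sigma_g) ** 2)
        w /= w.sum()
        z_filtered[t] = np.sum(w * particles)
        # Resample to avoid weight degeneracy
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return z_filtered
```

Repeating many such simulation-based filter evaluations is what makes the particle filter computationally demanding relative to a single Kalman recursion.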

Conclusions
This paper proposes an estimation procedure to decompose monetary shocks into permanent and transitory components using an inertial Taylor rule with time-varying inflation targeting. We provide empirical evidence on US historical monetary policy making through the lens of a Taylor rule for the period 1980-2011 (first quarter). Consistent with previous findings, the evidence for a regime change in the inflation target during the nineties is extremely weak. However, September 11, the recession that started in March 2001 and the subprime crisis were significant events that affected US monetary policy making in the last decade. We check the robustness of our empirical findings on flexible inflation targeting by comparing our estimates with those obtained using the particle filter. We show that the estimated deviations of the short-run inflation target from its long-run counterpart are remarkably similar over time across the two procedures. However, our estimation procedure is associated with lower mean squared errors when forecasting the theoretical difference between the short- and long-run targeted inflation rates.
Our estimation procedure also allows the comparison of our conditional probabilities of time-varying inflation targeting with those obtained from a regime-switching approach in which, with a constant long-term inflation target, the responses to the output gap and inflation are time-varying, as in the recent paper by Klingelhöfer and Sun (2017). If both estimated probabilities are close to one for a given time period, it would be interesting to assess whether the regime change is due not only to a new targeting regime but also to the updating of the responses. We leave this extension as a topic for further research.