
# Journal of Time Series Econometrics

Editor-in-Chief: Hidalgo, Javier

2 Issues per year

CiteScore 2017: 0.25

SCImago Journal Rank (SJR) 2017: 0.236
Source Normalized Impact per Paper (SNIP) 2017: 0.682

Online ISSN: 1941-1928
Volume 10, Issue 1

# The Chow-Lin method extended to dynamic models with autocorrelated residuals

Aurélien Poissonnier
• Corresponding author
• Insee, Dese, Timbre G220 15 bd Gabriel Péri BP. 100, Malakoff, Cedex 92244, France
• LMA, Crest, Malakoff, France
Published Online: 2017-08-26 | DOI: https://doi.org/10.1515/jtse-2016-0007

## Abstract

I provide a closed-form solution to temporal disaggregation or interpolation models which is both general in terms of dynamic structure of the model (lags of the high-frequency variable) and flexible in terms of autocorrelation of its residual. As for static models, I show that assuming autocorrelated residuals in dynamic models is practically convenient. To illustrate the potential of the solution proposed, I provide an example for quarterly non-financial corporations’ capital stock in computers and communication equipment.

JEL Classification: C22; C51; C82

## 1 Introduction

Temporal disaggregation or interpolation models are used to produce high-frequency estimates benchmarked on a low-frequency time series using high-frequency indicators. In particular, these models are currently used in a number of countries (e.g. France, Italy, Spain, Portugal, Switzerland) to compute quarterly national accounts from annual estimates and quarterly or monthly indicators.

The models currently used are mostly static, i.e. they do not include lags of the unobserved high-frequency variable: $Accoun{t}_{t}=Indicato{r}_{t}\ast \gamma +Residua{l}_{t}$. Generalizing the maximum likelihood solution of static models to dynamic models (i.e. models including lags of the unobserved high-frequency variable) poses some technical difficulties due to unobservability. In this paper I show that these difficulties can be solved and that the closed-form likelihood of the model can be expressed and maximized (without using a Kalman filter).

Because the left-hand side of the model is observed only at low frequency, the estimated high-frequency residual has non-standard properties. These properties depend on the autocorrelation structure assumed for the residual, and some assumptions imply inconvenient statistical artifacts.1 In this paper, I investigate the statistical properties of the high-frequency residual in the case of dynamic models and reach results and recommendations similar to those of the static case. I show (recall) that it is impossible to find a residual that fits the stochastic structure assumed for it. Ex post, the dynamic structure of the residual combines the assumption made for it with the autocorrelation structure of the model. This is a property of optimal signal extraction in unobserved component models. It does not disqualify the estimation procedure but should be kept in mind when using this method. A recommendation similar to the static case ensues: for practical reasons, even in dynamic models, errors should not be assumed to be white noise. Estimating a model with high autocorrelation of the residual limits the impact of the unexplained component on the high-frequency profile of the result.

As I show with an example, an appealing application of dynamic models is the production of quarterly accounts of stock variables such as productive capital, benchmarked on annual accounts. In this application example, I interpolate annual stocks of computers and communication equipment in non-financial corporations in France.

Temporal disaggregation or interpolation problems were first studied by Friedman (1962) in the case of commercial banks’ stock of money, and more formally introduced in a seminal paper by Chow and Lin (1971). The question of the dynamic structure of the model’s residual has been central in the development of this literature by Bournay and Laroque (1979), Litterman (1983), and Fernandez (1981). More recent developments (Grégoir 1994; Salazar, Smith, and Weale 1997; Santos Silva and Cardoso 2001; Proietti 2006) allow for richer dynamics in the model, introducing lags of the unobserved high-frequency variable (dynamic models). However, solving dynamic models within the Chow-Lin framework is no easy task. Di Fonzo (2003) notes for the earliest solutions that the algorithms needed to calculate the estimates and their standard errors seem rather complicated and not straightforward to implement in a computer program. Further demotivating the potential user, Liu and Hall (2001) show on US data that the gains from this added complexity may in fact be limited. Santos Silva and Cardoso (2001) overcome this difficulty and provide a straightforward solution to dynamic models under the assumption of white noise residuals. Proietti (2006) provides another solution based on state-space models and the Kalman filter numerical approximation of the likelihood. This tool is flexible enough to encompass most of the pre-existing models and allows further developments such as models in logarithms.

The present paper describes a general solution for dynamic models where the residual can be autocorrelated but the closed-form solution of the likelihood can still be expressed. In this sense, the method proposed fills a gap between Santos Silva and Cardoso (2001) and Proietti (2006).

The remainder of this paper is organized as follows: Section 2 presents a general solution to dynamic models, Section 3 investigates the ex post stochastic properties of the model’s residual, and Section 4 presents an application of this method to National Accounts data.

## 2.1 Dynamic Models: An AR(1) Introduction with Stocks

Dynamic models are an intuitive framework to describe stock dynamics. Macroeconomists, for instance, describe the dynamics of the capital stock $K$ with the following equation:

${K}_{t}=\left(1-\tau \right){K}_{t-1}+{I}_{t}$(1)

with $I$ the flow of investment and $\tau$ a depreciation rate.

An econometrician can adapt such dynamics to compute a stock $S$ at a higher frequency:

${S}_{t}=\rho {S}_{t-1}+{F}_{t}\gamma +{\epsilon }_{t}$(2)

with $F$ a proxy for the flows in and out of this stock2 and $\rho$ ($\rho <1$) playing the role of one minus a depreciation rate.

Let $f$ denote the periodicity of the flows (e.g. 12 months or 4 quarters) and $N+1$ denote the number of years for which the stocks are measured. By assumption, only $\left({S}_{fn}\right)_{0\le n\le N}$ is known (i.e. the stock every $f$ periods), while flows are measured every period $\left({F}_{t}\right)_{0<t\le fN}$. The purpose of stock interpolation is to estimate $\left({S}_{t}\right)_{0<t\le fN}$ (every period) using eq. (2) and $\left({F}_{t}\right)_{0<t\le fN}$, given the sub-sample $\left({S}_{fn}\right)_{0\le n\le N}$.

To estimate the model by maximum likelihood, one needs to abstract from the unobserved variable (the high-frequency capital stock) and summarize the constraint imposed on the residuals by the model. To do so, one can iterate eq. (2) in the following way:

${S}_{t}={\rho }^{f}{S}_{t-f}+\sum _{i=0}^{f-1}{\rho }^{i}\left({F}_{t-i}\gamma +{\epsilon }_{t-i}\right)$(3)

$⇔\sum _{i=0}^{f-1}{\rho }^{i}{\epsilon }_{t-i}={S}_{t}-{\rho }^{f}{S}_{t-f}-\sum _{i=0}^{f-1}{\rho }^{i}{F}_{t-i}\gamma$(4)

Letting ${\theta }_{t}$ denote $\sum _{i=0}^{f-1}{\rho }^{i}{\epsilon }_{t-i}$, one then isolates a combination of residuals on one side and a combination of observed variables on the other:

${\theta }_{t}={S}_{t}-{\rho }^{f}{S}_{t-f}-\sum _{i=0}^{f-1}{\rho }^{i}{F}_{t-i}\gamma$(5)

Equation (5) is the constraint imposed on the residuals by the low frequency data.3 The maximization of the likelihood of model (2) can be performed subject to this constraint.

**Matrix notation** Let $\mathrm{\Sigma }$ denote the $\left(\left(N+1\right)×1\right)$ vector of annual stocks (from year 0 to $N$) and $X$ the $\left(fN×k\right)$ matrix of flow indicators. Let $\mathrm{\Theta }$ denote the vector $\left[{\theta }_{f},{\theta }_{2f},\dots ,{\theta }_{fN}\right]$, and let $A\otimes B$ denote the Kronecker product of $A$ and $B$. Equation (5) becomes:

$\mathrm{\Theta }=\left[\begin{array}{ccccc}-{\rho }^{f}& 1& 0& \dots & 0\\ 0& -{\rho }^{f}& 1& 0& \dots \\ & & \ddots & \ddots & 0\\ & \dots & 0& -{\rho }^{f}& 1\end{array}\right]\mathrm{\Sigma }-Id\left(N\right)\otimes \left[\begin{array}{cccc}{\rho }^{f-1}& \dots & \rho & 1\end{array}\right]X\gamma$(6)

with $Id\left(N\right)$ the identity matrix of size $N$. With straightforward notations, I can write:

$\mathrm{\Theta }={M}_{1}\left(\rho \right)\mathrm{\Sigma }-{M}_{2}\left(\rho \right)X\gamma$(7)
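To make this constraint concrete, here is a minimal numpy sketch (all parameter values and the simulated series are illustrative, not taken from the paper) that builds $M_1(\rho)$ and $M_2(\rho)$ for the stock model and checks that the combination of observed variables in eq. (7) equals $M_2 E$, as eq. (5) states:

```python
import numpy as np

rng = np.random.default_rng(0)
f, N = 4, 10                        # 4 quarters per year, 10 years of constraints
rho, gamma, sigma = 0.8, 1.0, 0.1   # illustrative parameter values

# Simulate the high-frequency stock S_t = rho*S_{t-1} + F_t*gamma + eps_t
F = rng.normal(1.0, 0.2, size=f * N + 1)    # flows, t = 0..fN
eps = rng.normal(0.0, sigma, size=f * N + 1)
S = np.empty(f * N + 1)
S[0] = 10.0
for t in range(1, f * N + 1):
    S[t] = rho * S[t - 1] + F[t] * gamma + eps[t]

# M1(rho): maps the N+1 observed annual stocks to S_{fn} - rho^f S_{f(n-1)}
M1 = np.zeros((N, N + 1))
for n in range(N):
    M1[n, n], M1[n, n + 1] = -rho ** f, 1.0

# M2(rho) = Id(N) ⊗ [rho^{f-1}, ..., rho, 1], acting on the fN residuals
w = rho ** np.arange(f - 1, -1, -1)
M2 = np.kron(np.eye(N), w)

Sigma_lf = S[::f]                   # the stocks observed every f periods
E = eps[1:]                         # high-frequency residuals, t = 1..fN
X = F[1:]

# Eq. (7): Theta = M1(rho) Sigma - M2(rho) X gamma, which must equal M2 E (eq. 5)
Theta = M1 @ Sigma_lf - M2 @ (X * gamma)
assert np.allclose(Theta, M2 @ E)
```

The point of the sketch is that $\Theta$ is computable from observed data alone, while $M_2 E$ involves the unobserved residuals; the identity between the two is what carries the low-frequency information into the likelihood.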

**Maximum likelihood** To compute the likelihood of the model, one should assume a stochastic structure for the high-frequency residual $\epsilon$. Let $\mathrm{\Omega }$ denote its variance-covariance matrix. The maximization of the likelihood of the model is:

$\underset{E,\rho ,\gamma ,\mathrm{\Omega }}{\text{Max}}\frac{1}{{\sqrt{2\pi }}^{fN}{\left|\mathrm{\Omega }\right|}^{\frac{1}{2}}}exp\left(-\frac{1}{2}{E}^{\prime }{\mathrm{\Omega }}^{-1}E\right)$(8)

$\text{s}.\text{t}.\phantom{\rule{2em}{0ex}}{M}_{2}E=\mathrm{\Theta }$(9)

with $E$ the $\left(fN×1\right)$ vector of $\epsilon$ and the simplifying notation ${M}_{2}={M}_{2}\left(\rho \right)$. Note also that $\mathrm{\Theta }$ depends on the parameters of the model and the observed variables.

#### Proposition 1:

A formal solution for $E$

For any value of the parameters to be estimated, the optimization with respect to $E$ yields the following solution:

$\stackrel{ˆ}{E}=\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$(10)

#### Proposition 2:

**Concentrated likelihood of this problem** Knowing the functional form of $\stackrel{ˆ}{E}$, to estimate the model’s parameters, problem (8) can be summarized into:

$\underset{\rho ,\gamma ,\mathrm{\Omega }}{\text{Max}}\phantom{\rule{2em}{0ex}}\frac{1}{{\sqrt{2\pi }}^{fN}{\left|\mathrm{\Omega }\right|}^{\frac{1}{2}}}exp\left(-\frac{1}{2}{\mathrm{\Theta }}^{\prime }{\left({\mathrm{\Omega }}^{\theta }\right)}^{-1}\mathrm{\Theta }\right)$(11)

with ${\mathrm{\Omega }}^{\theta }={M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }$ the variance-covariance matrix of $\mathrm{\Theta }$.

In problem (11), $\mathrm{\Theta }$ depends on the low frequency variables and the parameters $\rho$ and $\gamma$, $\mathrm{\Omega }$ is a variance-covariance matrix which is by assumption regular and depends on a limited set of parameters (e.g. innovation’s variance and autocorrelation) and ${\mathrm{\Omega }}^{\theta }$ depends on $\mathrm{\Omega }$ and $\rho$. As a consequence, the analytical formula for the likelihood can be written and maximized directly. It is then not necessary to use a state-space representation and the numerical approximation of the likelihood by the Kalman filter to find the optimal values for the parameters. Note that, although it is close to the likelihood function of $\mathrm{\Theta }$, the objective function to be maximized is slightly different (the determinant is that of $\mathrm{\Omega }$ not ${\mathrm{\Omega }}^{\theta }$).
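Proposition 1 can be checked numerically. The sketch below (with an assumed illustrative AR(1) variance-covariance matrix $\Omega$ and arbitrary discrepancies $\Theta$; none of these values come from the paper) verifies that the smoothed residual of eq. (10) satisfies the low-frequency constraint $M_2\hat{E}=\Theta$ exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
f, N = 4, 8
rho = 0.8
w = rho ** np.arange(f - 1, -1, -1)
M2 = np.kron(np.eye(N), w)              # constraint matrix, N x fN

# Assumed AR(1) structure for the residual, autocorrelation mu (illustrative)
mu, sigma2 = 0.5, 1.0
t = np.arange(f * N)
Omega = sigma2 * mu ** np.abs(t[:, None] - t[None, :]) / (1 - mu ** 2)

Theta = rng.normal(size=N)              # some low-frequency discrepancies

# Proposition 1: E_hat = Omega M2' (M2 Omega M2')^{-1} Theta
E_hat = Omega @ M2.T @ np.linalg.solve(M2 @ Omega @ M2.T, Theta)

# The smoothed residual satisfies the low-frequency constraint exactly
assert np.allclose(M2 @ E_hat, Theta)
```

With $\hat{E}$ expressed analytically, the concentrated objective (11) is an ordinary function of $(\rho, \gamma, \Omega)$ and can be maximized by any standard optimizer, with no Kalman filter involved.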

## 2.2 General Solution

The use of dynamic models in the case of flow variables may seem less intuitive than for stock variables. However, models in first differences or growth rates tend to be more accurate in some flow cases (Eurostat 2013, 5.C31), so a dynamic framework can also be useful for flows.

The dynamic model for flows can be treated in the same framework as exposed above. In this section, I derive the general case for a dynamic model with $p$ lags which can be applied to either stocks or flows.

${Y}_{t}=\sum _{i=1}^{p}{\rho }_{i}{Y}_{t-i}+{X}_{t}\beta +{\epsilon }_{t}$(12)

with ${Y}_{t}$ an account and ${X}_{t}$ high frequency indicators. By assumption, only $\left({Y}_{n}^{lf}=\sum _{i=0}^{f-1}{Y}_{fn-i}\right)_{1\le n\le N}$ is known (i.e. the cumulated flow every $f$ periods), while the indicators are measured every period $\left({X}_{t}\right)_{1\le t\le fN}$.

Using vector notation,

$\left[\begin{array}{c}{Y}_{1}\\ ⋮\\ {Y}_{Nf}\end{array}\right]=Y=\left[\begin{array}{ccccc}{\rho }_{p}& \cdots & {\rho }_{1}& 0& 0\\ \ddots & \cdots & \ddots & \ddots & 0\\ 0\cdots & {\rho }_{p}& \cdots & {\rho }_{1}& 0\end{array}\right]\left[\begin{array}{c}{Y}_{1-p}\\ ⋮\\ {Y}_{0}\\ {Y}_{1}\\ ⋮\\ {Y}_{Nf}\end{array}\right]+X\beta +E$(13)

Isolating the LHS vector in the RHS of eq. (13) yields the following:

$Y=\left[\begin{array}{cccc}0& \cdots & \cdots & 0\\ {\rho }_{1}& 0& \ddots & 0\\ {\rho }_{p}& \ddots & \ddots & \\ 0& \ddots & \ddots & 0\\ 0\cdots & {\rho }_{p}\cdots & {\rho }_{1}& 0\end{array}\right]\left[\begin{array}{c}{Y}_{1}\\ ⋮\\ {Y}_{Nf}\end{array}\right]+X\beta +E+\left[\begin{array}{ccc}{\rho }_{p}& \cdots & {\rho }_{1}\\ ⋮& \ddots & ⋮\\ 0& \cdots & {\rho }_{p}\end{array}\right]\left[\begin{array}{c}{Y}_{1-p}\\ ⋮\\ {Y}_{0}\end{array}\right]$(14)

With straightforward notations,

$Y={M}_{4}\left({\rho }_{1},\cdots ,{\rho }_{p}\right)Y+X\beta +E+{M}_{5}\left({\rho }_{1},\cdots ,{\rho }_{p}\right){Y}^{init}$(15)

$Y={\left(Id-{M}_{4}\left({\rho }_{1},\cdots ,{\rho }_{p}\right)\right)}^{-1}\left(X\beta +E+{M}_{5}\left({\rho }_{1},\cdots ,{\rho }_{p}\right){Y}^{init}\right)$(16)

Summing the $f$ subperiods of each year with matrix ${M}_{2}\left(1\right)$ makes it possible to isolate a combination of high-frequency residuals as a function of the observed variables and the parameters.

${Y}^{lf}-{M}_{2}\left(1\right){\left(Id-{M}_{4}\left({\rho }_{1},\cdots ,{\rho }_{p}\right)\right)}^{-1}\left(X\beta +{M}_{5}\left({\rho }_{1},\cdots ,{\rho }_{p}\right){Y}^{init}\right)={M}_{2}\left(1\right){\left(Id-{M}_{4}\left({\rho }_{1},\cdots ,{\rho }_{p}\right)\right)}^{-1}E$(17)

$\mathrm{\Theta }={M}_{6}E$(18)

Equation (18) summarizes the constraint imposed by the model on the residuals. From this expression, Propositions 1 and 2 can be directly applied to estimate all parameters (${\rho }_{1},\cdots ,{\rho }_{p}$, $\beta$, ${Y}_{0},\cdots ,{Y}_{1-p}$, and the variance and autocorrelation of $\epsilon$) as well as both the high-frequency account and residual.

Passing from flows to stocks requires defining the low-frequency data not as an aggregation of the high-frequency ones but as observations of the high-frequency data every $f$ periods. This is simply done by replacing matrix ${M}_{2}\left(1\right)$ with ${M}_{2}\left(0\right)$ in eq. (17) to define the aggregation constraint (18). Propositions 1 and 2 can then be applied to estimate the parameters of either model by maximum likelihood.
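A short numpy sketch of this construction (the AR coefficients and dimensions are illustrative assumptions, and the matrix names follow the paper's $M_2$, $M_4$, $M_6$ notation):

```python
import numpy as np

f, N, p = 4, 6, 2
rho = np.array([0.5, 0.2])          # illustrative AR coefficients
T = f * N

def M2(r, f=f, N=N):
    """Id(N) ⊗ [r^{f-1}, ..., r, 1]: r=1 sums flows, r=0 samples stocks."""
    return np.kron(np.eye(N), r ** np.arange(f - 1, -1, -1))

# M4: lag matrix such that Y = M4 Y + X beta + E + M5 Y_init (eq. 15)
M4 = np.zeros((T, T))
for i, r in enumerate(rho, start=1):
    M4 += r * np.eye(T, k=-i)

# M6 maps the high-frequency residuals to the low-frequency discrepancies (eq. 18)
M6_flow  = M2(1.0) @ np.linalg.inv(np.eye(T) - M4)   # cumulated annual flows
M6_stock = M2(0.0) @ np.linalg.inv(np.eye(T) - M4)   # end-of-year stocks
```

Note that $M_2(1)$ has rows of ones (annual sums) while $M_2(0)$ selects only the last sub-period of each year, which is exactly the flow/stock switch described above.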

In the introductory example with a stock variable and only one lag, the initial points (${Y}^{init}$) can be taken from the observed low-frequency data. With a flow variable or more than one lag, the initialization values are unobserved and have to be passed as additional parameters, a situation identical to that of standard moving average models.

## 3.1 Theoretical Properties

Once the model is estimated, only some information at low frequency on the residual (the combination $\mathrm{\Theta }$) is known. Equation (10) allows one to smooth this residual at the higher frequency. As $\epsilon$ is an exogenous and unexplained component, in practice one may want to limit its influence on the profile of the interpolated stock within the years. Otherwise, interpretation or econometric results based on interpolated stocks could be discredited by potential statistical artifacts. The typical issue for quarterly national accounts is a residual with a jump every first quarter. As I show below and in the example of Section 3.2, the statistically optimal distribution of the high-frequency residual in dynamic models can induce undesirable features such as this jump every first quarter.

**Theoretical variance-covariance matrix of the estimated residual** Given formula (10), the estimated residual is equal to $\stackrel{ˆ}{E}=\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$ while by definition of ${M}_{2}$, $\mathrm{\Theta }={M}_{2}E$. Thus the theoretical variance-covariance matrix of the estimated process $\stackrel{ˆ}{E}$ is:

$\mathbb{E}\left(\stackrel{ˆ}{E}{\stackrel{ˆ}{E}}^{\prime }\right)={\mathrm{\Omega }}^{E}=\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}{M}_{2}\mathrm{\Omega }$(19)

It has the following properties:

${M}_{2}{\mathrm{\Omega }}^{E}={M}_{2}\mathrm{\Omega }$(20)

${\mathrm{\Omega }}^{E}{{M}_{2}}^{\prime }=\mathrm{\Omega }{{M}_{2}}^{\prime }$(21)

While $\mathrm{\Omega }={\mathrm{\Omega }}^{E}$ would imply eqs. (20) and (21), the converse is false since ${M}_{2}^{\prime }{M}_{2}$, contrary to ${M}_{2}{{M}_{2}}^{\prime }$, is not invertible.

Hence, $\stackrel{ˆ}{E}$ does not have the same stochastic properties as $E$, but combines the assumption made for $E$ (through the matrix $\mathrm{\Omega }$) with the dynamics of the model (${M}_{2}$).

**An application of the Wiener-Kolmogorov optimal signal extraction theory** The previous result can be linked to the signal extraction framework developed by Wiener and Kolmogorov.

Conditional on the values of the parameters to be estimated, the problem can be written as follows:

${\theta }_{fn}=\sum _{i=1}^{f}{\omega }_{n,i}\phantom{\rule{1em}{0ex}}\text{with}\phantom{\rule{1em}{0ex}}{\omega }_{n,i}={\rho }^{f-i}{\epsilon }_{f\left(n-1\right)+i}$(22)

${\theta }_{fn}$, observed every $f$ periods, is the sum of $f$ signals ${\left({\omega }_{n,i}\right)}_{1\le i\le f}$ (Figure 1).

Figure 1:

Timeline example with quarterly data ($f=4$).

On a two-sided infinite sample, the optimal filter to extract each of these signals takes the form:

${\stackrel{ˆ}{\omega }}_{n,i}=\sum _{s=-\mathrm{\infty }}^{\mathrm{\infty }}{\gamma }_{s,i}{\theta }_{f\left(n-s\right)}={\gamma }_{i}\left(L\right){\theta }_{fn}$(23)

with ${\gamma }_{i}\left(L\right)=\sum _{s=-\mathrm{\infty }}^{\mathrm{\infty }}{\gamma }_{s,i}{L}^{s}$ and $L$ the lag operator for annual data.

We know from signal extraction theory (e.g. Whittle 1963) that ${\gamma }_{i}$ depends on the covariance generating functions of the $f+1$ processes ${\left({\theta }_{fn},{\omega }_{n,1},\cdots ,{\omega }_{n,f}\right)}_{n\in \mathbb{Z}}$.

${\gamma }_{i}\left(L\right)=\frac{{f}_{{\omega }_{i},{\omega }_{i}}\left(L\right)+\sum _{j\ne i}{f}_{{\omega }_{i},{\omega }_{j}}\left(L\right)}{{f}_{\theta ,\theta }\left(L\right)}$(24)

In the simplest case, where $\epsilon$ is an i.i.d. white noise of variance ${\sigma }^{2}$, the formula simplifies to:

${\gamma }_{i}\left(L\right)=\frac{{\rho }^{2\left(f-i\right)}{\sigma }^{2}}{\sum _{i=1}^{f}{\rho }^{2\left(f-i\right)}{\sigma }^{2}}$(25)

$=\frac{{\rho }^{2\left(f-i\right)}}{\frac{1-{\rho }^{2f}}{1-{\rho }^{2}}}$(26)

which yields

${\stackrel{ˆ}{\omega }}_{n,i}=\frac{1-{\rho }^{2}}{1-{\rho }^{2f}}{\rho }^{2\left(f-i\right)}{\theta }_{fn}$(27)

${\stackrel{ˆ}{\epsilon }}_{f\left(n-1\right)+i}=\frac{1-{\rho }^{2}}{1-{\rho }^{2f}}{\rho }^{f-i}{\theta }_{fn}$(28)

Given the i.i.d. hypothesis, this result is identical to the finite-sample solution detailed in Section 3.2.1.
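The filter weights of eq. (27) can be verified directly; the sketch below (with illustrative values of $f$ and $\rho$) checks that the $f$ extracted signals add back up to the observed ${\theta }_{fn}$, and that the weights tend to the uniform $1/f$ split as $\rho \to 1$:

```python
import numpy as np

f, rho = 4, 0.8
c = (1 - rho ** 2) / (1 - rho ** (2 * f))

# Filter weights of eq. (27): omega_hat_{n,i} = c * rho^{2(f-i)} * theta_{fn}
i = np.arange(1, f + 1)
weights = c * rho ** (2 * (f - i))

# The f extracted signals must add back up to the observed theta_fn exactly
assert np.isclose(weights.sum(), 1.0)

# As rho -> 1 (the static case), the weights tend to the uniform 1/f split
rho1 = 0.999
c1 = (1 - rho1 ** 2) / (1 - rho1 ** (2 * f))
assert np.allclose(c1 * rho1 ** (2 * (f - i)), np.full(f, 1 / f), atol=1e-2)
```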

In the static case, ${M}_{2}\left(\rho \right)$ should be replaced by ${M}_{2}\left(1\right)$ in the definition of $\mathrm{\Theta }$, in other words $\rho =1$ in the definition of ${\omega }_{n,i}$, thus eq. (25) simplifies into ${\gamma }_{i}\left(L\right)=\frac{1}{f}$ which yields the standard result ${\stackrel{ˆ}{\omega }}_{n,i}={\stackrel{ˆ}{\epsilon }}_{f\left(n-1\right)+i}=\frac{{\theta }_{fn}}{f}$ exemplified by Figure 3b.

Since if ${y}_{t}=A\left(L\right){x}_{t}$ then ${f}_{y,y}\left(L\right)=A\left(L\right)A\left({L}^{-1}\right){f}_{x,x}\left(L\right)$, the autocovariance generating function of ${\stackrel{ˆ}{\omega }}_{n,i}$ is:

${f}_{{\stackrel{ˆ}{\omega }}_{i},{\stackrel{ˆ}{\omega }}_{i}}\left(L\right)={\gamma }_{i}\left(L\right){\gamma }_{i}\left({L}^{-1}\right){f}_{\theta ,\theta }\left(L\right)$(29)

$=\frac{\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left(L\right)\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left({L}^{-1}\right)}{\sum _{i=1}^{f}\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left({L}^{-1}\right)}$(30)

from which it follows that the property ${f}_{{\stackrel{ˆ}{\omega }}_{i},{\stackrel{ˆ}{\omega }}_{i}}={f}_{{\omega }_{i},{\omega }_{i}}$ is not true in general.

The covariance generating function of ${\stackrel{ˆ}{\omega }}_{i}$ with ${\stackrel{ˆ}{\omega }}_{k}$ is:

${f}_{{\stackrel{ˆ}{\omega }}_{i},{\stackrel{ˆ}{\omega }}_{k}}\left(L\right)={\gamma }_{i}\left(L\right){\gamma }_{k}\left({L}^{-1}\right){f}_{\theta ,\theta }\left(L\right)$(31)

$=\frac{\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left(L\right)\sum _{j=1}^{f}{f}_{{\omega }_{k},{\omega }_{j}}\left({L}^{-1}\right)}{\sum _{i=1}^{f}\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left({L}^{-1}\right)}$(32)

$=\frac{\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left(L\right)\sum _{j=1}^{f}{f}_{{\omega }_{j},{\omega }_{k}}\left(L\right)}{\sum _{i=1}^{f}\sum _{j=1}^{f}{f}_{{\omega }_{i},{\omega }_{j}}\left({L}^{-1}\right)}$(33)

and here again, the property ${f}_{{\stackrel{ˆ}{\omega }}_{i},{\stackrel{ˆ}{\omega }}_{k}}={f}_{{\omega }_{i},{\omega }_{k}}$ is not true in general.

In particular in the white noise case, even though we assumed that there is no correlation between sub-periods within and across the years (${f}_{{\omega }_{i},{\omega }_{k}}=0$), the estimated process shows correlation within the years (${f}_{{\stackrel{ˆ}{\omega }}_{i},{\stackrel{ˆ}{\omega }}_{k}}=\frac{1-{\rho }^{2}}{1-{\rho }^{2f}}{\rho }^{2\left(2f-i-k\right)}$).

This property does not invalidate the estimation procedure; it is a general result of optimal signal extraction in unobserved component models. Nor is it due to the sample size.

## 3.2 Three Examples for the High-Frequency Residual

To estimate the value of the depreciation rate $\rho$ and the vector $\gamma$, one should assume a stochastic structure for the high-frequency residual $\epsilon$ in order to maximize the concentrated likelihood eq. (11).

## 3.2.1 White Noise Residuals ε

It is straightforward to see that ${\theta }_{fi}$ is a linear combination of $\left[{\epsilon }_{\left(i-1\right)f+1},{\epsilon }_{\left(i-1\right)f+2},\cdots ,{\epsilon }_{if}\right]$. It is independent of ${\theta }_{fj}$ $\mathrm{\forall }j\ne i$ when $\epsilon$ is white noise (see Figure 1).

If one assumes $\epsilon \sim {N}\left(0,{\sigma }^{2}\right)$, then $\theta \sim {N}\left(0,{\sigma }^{2}\frac{1-{\rho }^{2f}}{1-{\rho }^{2}}\right)$. The log-likelihood to be maximized then becomes:

${L}\left(\mathrm{\Sigma },X,\rho ,\sigma \right)=-\frac{N}{2}log\left(2\pi \right)-\frac{N}{2}\text{\hspace{0.17em}}log\left({\sigma }^{2}\right)-\frac{{\mathrm{\Theta }}^{\prime }\mathrm{\Theta }}{2{\sigma }^{2}\frac{1-{\rho }^{2f}}{1-{\rho }^{2}}}$(34)
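As a numerical sketch (all parameter values and the simulated data are illustrative assumptions), the likelihood can be profiled over $\rho$ on a grid, using the Gaussian density of the $N$ independent components of $\mathrm{\Theta }$ with the variance ${\sigma }^{2}\frac{1-{\rho }^{2f}}{1-{\rho }^{2}}$ that appears in the quadratic term of eq. (34):

```python
import numpy as np

rng = np.random.default_rng(4)
f, N = 4, 40
rho_true, gamma, sigma = 0.8, 1.0, 0.05

# Simulate the stock model and keep only the low-frequency observations
F = rng.normal(1.0, 0.2, size=f * N + 1)
S = np.empty(f * N + 1)
S[0] = 5.0
for t in range(1, f * N + 1):
    S[t] = rho_true * S[t - 1] + gamma * F[t] + rng.normal(0, sigma)
S_lf = S[::f]

def loglik(rho, gamma, sigma, S_lf=S_lf, F=F, f=f, N=N):
    """Gaussian log-likelihood of Theta under white-noise residuals."""
    w = rho ** np.arange(f - 1, -1, -1)
    M2 = np.kron(np.eye(N), w)
    Theta = (S_lf[1:] - rho ** f * S_lf[:-1]) - M2 @ (gamma * F[1:])
    v = sigma ** 2 * (1 - rho ** (2 * f)) / (1 - rho ** 2)
    return -N / 2 * np.log(2 * np.pi) - N / 2 * np.log(v) - Theta @ Theta / (2 * v)

# Profile the likelihood over rho on a grid (gamma and sigma held at truth)
grid = np.linspace(0.5, 0.99, 50)
rho_hat = grid[np.argmax([loglik(r, gamma, sigma) for r in grid])]
```

In practice all parameters would be maximized jointly; holding $\gamma$ and $\sigma$ at their true values here only keeps the sketch short.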

Given the annual discrepancies $\mathrm{\Theta }$, the optimal values for the residual at high frequency ($E$) are given by eq. (10), which simplifies into:

$\stackrel{ˆ}{E}={{M}_{2}}^{\prime }\left({M}_{2}{{M}_{2}}^{\prime }{\right)}^{-1}\mathrm{\Theta }$(35)

which can be further simplified into:

$\stackrel{ˆ}{E}=\frac{1-{\rho }^{2}}{1-{\rho }^{2f}}{{M}_{2}}^{\prime }\mathrm{\Theta }$(36)

As a consequence, the sequence of estimated residuals is:

${\stackrel{ˆ}{\epsilon }}_{f\left(n-1\right)+i}=\frac{1-{\rho }^{2}}{1-{\rho }^{2f}}{\rho }^{f-i}{\theta }_{nf}\phantom{\rule{2em}{0ex}}\text{with}\phantom{\rule{2em}{0ex}}1\le i\le f,\phantom{\rule{1em}{0ex}}1\le n\le N$(37)

which is identical to the result in infinite sample eq. (28).
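The equivalence of the general formula (35) and its simplified form (36) is easy to verify numerically (dimensions, $\rho$, and the discrepancies below are illustrative):

```python
import numpy as np

f, N = 4, 5
rho = 0.8
w = rho ** np.arange(f - 1, -1, -1)
M2 = np.kron(np.eye(N), w)
Theta = np.random.default_rng(2).normal(size=N)

# General formula (35), with Omega proportional to the identity ...
E_full = M2.T @ np.linalg.solve(M2 @ M2.T, Theta)

# ... equals the simplified eq. (36), since M2 M2' = (1-rho^{2f})/(1-rho^2) Id
c = (1 - rho ** 2) / (1 - rho ** (2 * f))
E_simple = c * M2.T @ Theta
assert np.allclose(E_full, E_simple)
```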

Ex post, the high-frequency residuals, although assumed to be white noise, are not stationary within each year (they follow a geometric series with coefficient $\frac{1}{\rho }>1$). Indeed, the variance-covariance matrix of the process once reconstructed at high frequency reads

${\mathrm{\Omega }}^{E}={\sigma }^{2}Id\left(N\right)\otimes \left(\frac{1-{\rho }^{2}}{1-{\rho }^{2f}}\right)\left[\begin{array}{ccccc}1& {\rho }^{-1}& {\rho }^{-2}& \dots & {\rho }^{-f}\\ {\rho }^{-1}& 1& {\rho }^{-1}& \dots & {\rho }^{-f-1}\\ ⋮& & & \ddots & ⋮\\ {\rho }^{-f}& \dots & {\rho }^{-2}& {\rho }^{-1}& 1\end{array}\right]$(38)

One recognizes the Kronecker product of the variance-covariance matrix of an i.i.d. white noise at low frequency with that of an autocorrelated but non-stationary process at high frequency.

Moreover, estimated shocks exhibit breaks every $f$ periods. The magnitude of these breaks increases with the variance of $\theta$. Figure 2 illustrates this undesired property on a simulated sample of 15 years.

Figure 2

Simulated and estimated residual under white noise hypothesis, in a stock model ($\rho =0.8$).

Figure 3 shows that the same result holds for the optimal method applied to flow variables in a static model (when $\rho =0$), which justified the developments proposed by Bournay and Laroque (1979), Fernandez (1981), and Litterman (1983). Figure 3 also shows that even in a dynamic model for flows, the solution under a white noise hypothesis exhibits jumps every first quarter. Using state-space models on Swiss data to estimate a monthly GDP, Cuche and Hess (2000) may have wrongfully disregarded dynamic specifications for this reason: they only consider white noise residuals in these cases and find that the outcome of such models generates an unexplained cyclical pattern. Although residuals are white noise in both cases, Figure 3a is different from Figure 2 because in a stock model only ${\theta }_{ft}$ is observed while in a flow model $\sum _{i=0}^{f-1}{\theta }_{ft-i}$ is. In particular, one can check that when the dynamic component of the model diminishes ($\rho \to 0$), the treatment of the discrepancy tends to simply divide the annual residual by the number of sub-periods (4 quarters or 12 months), as is the case for the static model. In other words, it is only with highly autocorrelated models that the impact at higher frequency of the estimated residual on the final estimates can be minimized.

Figure 3

Simulated and estimated residual under white noise hypothesis in a flow model.

## 3.2.2 Autocorrelated Residuals (AR(1), Generalization of (Chow and Lin 1971))

I now assume that the high-frequency residual follows an AR(1) process:

${\epsilon }_{t}=\mu {\epsilon }_{t-1}+{\eta }_{t}$(39)

with $\eta$ a white noise.

Figure 4 shows that when the high-frequency residual is assumed to follow an AR(1) process, the estimation yields a much smoother result. However, while the underlying AR(1) is only slightly autocorrelated ($\mu =0.3$), the estimated AR(1) is markedly so. As expected from formula (19), the estimated shocks at high frequency encompass both dynamics: that of the theoretical shock and that of the model.

Figure 4

Simulated and estimated residual under AR(1) hypothesis ($\mu =0.3$), in a stock model ($\rho =0.8$).

## 3.2.3 Random Walk Residual, Generalization of (Fernandez 1981)

I assume that:

${\epsilon }_{t}={\epsilon }_{t-1}+{\eta }_{t}$(40)

with $\eta$ a stationary process of innovations. In this particular case, the likelihood to be maximized should be modified into:

$\underset{E,\rho ,\gamma ,{\mathrm{\Omega }}^{\eta }}{\text{Max}}\phantom{\rule{2em}{0ex}}\frac{1}{{\sqrt{2\pi }}^{fN}{\left|{\mathrm{\Omega }}^{\eta }\right|}^{\frac{1}{2}}}exp\left(-\frac{1}{2}{\left({D}_{1}E\right)}^{\prime }{\left({\mathrm{\Omega }}^{\eta }\right)}^{-1}{D}_{1}E\right)$(41)

$\text{s}.\text{t}.\phantom{\rule{2em}{0ex}}{M}_{2}E=\mathrm{\Theta }$(42)

with ${\mathrm{\Omega }}^{\eta }$ the variance–covariance matrix of the innovation $\eta$ and ${D}_{1}$ the first difference operator.

${D}_{1}=\left[\begin{array}{ccccc}-1& 1& 0& & \\ 0& -1& 1& 0& \\ & & \ddots & \ddots & \\ & & 0& -1& 1\end{array}\right]$(43)

#### Proposition 3:

This problem can be treated similarly to eqs. (8), (9).

The proof for this proposition is given in the appendix; see also Insee’s Quarterly National Accounts methodology (Insee, Quarterly National Accounts Division 2012).

A tractable approximation for this solution is to use a square version of ${D}_{1}$ so that ${{D}_{1}}^{\prime }{D}_{1}$ is invertible:

${D}_{1}=\left[\begin{array}{ccccc}1& 0& 0& & \\ -1& 1& 0& & \\ 0& -1& 1& 0& \\ & & \ddots & \ddots & \\ & & 0& -1& 1\end{array}\right]$(44)

in which case the solution for $E$ is:

$\stackrel{ˆ}{E}={\left({{D}_{1}}^{\prime }\left({\mathrm{\Omega }}^{\eta }{\right)}^{-1}{D}_{1}\right)}^{-1}{{M}_{2}}^{\prime }{\left({M}_{2}{\left({{D}_{1}}^{\prime }\left({\mathrm{\Omega }}^{\eta }{\right)}^{-1}{D}_{1}\right)}^{-1}{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$(45)

Following Fernandez (1981), I assume $\eta$ to be white noise (${\mathrm{\Omega }}^{\eta }=Id$), so that this solution simplifies into:

$\stackrel{ˆ}{E}={\left({{D}_{1}}^{\prime }{D}_{1}\right)}^{-1}{{M}_{2}}^{\prime }{\left({M}_{2}{\left({{D}_{1}}^{\prime }{D}_{1}\right)}^{-1}{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$(46)
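A minimal numpy sketch of eq. (46) with the square version of $D_1$ from eq. (44) (dimensions, $\rho$, and the discrepancies are illustrative assumptions):

```python
import numpy as np

f, N = 4, 5
T = f * N
rho = 0.8
w = rho ** np.arange(f - 1, -1, -1)
M2 = np.kron(np.eye(N), w)

# Square first-difference operator (eq. 44), so that D1'D1 is invertible
D1 = np.eye(T) - np.eye(T, k=-1)

Theta = np.random.default_rng(3).normal(size=N)

# Eq. (46): smooth the residual under the random-walk hypothesis
V = np.linalg.inv(D1.T @ D1)
E_hat = V @ M2.T @ np.linalg.solve(M2 @ V @ M2.T, Theta)

assert np.allclose(M2 @ E_hat, Theta)   # low-frequency constraint holds
```

The role of $(D_1'D_1)^{-1}$ here is to penalize period-to-period variations of the residual, which is what produces the Denton-like smoothness discussed below eq. (46).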

Figure 5

Simulated and estimated residual under random walk hypothesis, in a stock model ($\rho =0.8$).

Fernandez (1981) shows, in the case of static models, that assuming the residuals follow a random walk is equivalent to distributing an annual discrepancy as in Denton (1971). Equation (46), analogous to Denton’s result, provides a similar conclusion for dynamic models. The practical advantage of this assumption is that, by construction of the method proposed by Denton (1971), it minimizes the variations of the residual from period to period (see Figure 5). Hence, ex post, the profile of the high-frequency estimate is impacted as little as possible by the unexplained component of the model.

## 3.2.4 ARIMA(1,1,0), Generalization of Litterman (1983)

I assume that:

$\mathrm{\Delta }{\epsilon }_{t}=\mu \mathrm{\Delta }{\epsilon }_{t-1}+{\eta }_{t}$(47)

with $\eta$ a stationary process of innovations and $\mathrm{\Delta }$ the first difference operator.

Following Litterman (1983), the residual thus follows an ARIMA(1,1,0) process, and the solution for $E$ is:

$\stackrel{ˆ}{E}={\left({{D}_{1}}^{\prime }\left({\mathrm{\Omega }}^{\eta }{\right)}^{-1}{D}_{1}\right)}^{-1}{{M}_{2}}^{\prime }{\left({M}_{2}{\left({{D}_{1}}^{\prime }\left({\mathrm{\Omega }}^{\eta }{\right)}^{-1}{D}_{1}\right)}^{-1}{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$(48)

with ${\mathrm{\Omega }}^{\eta }$ the covariance matrix of a first-order autocorrelated process. A simulation result is presented in Figure 6.
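A sketch of eq. (48) in the same illustrative setting as before: the only change relative to the random-walk case is the AR(1) covariance of the innovations, with entries ${\mu }^{|i-j|}/(1-{\mu }^{2})$. The aggregation matrix, $\Theta$ and $\mu =0.3$ are assumptions for illustration, not the paper's application.

```python
import numpy as np

# Sketch of eq. (48) with AR(1) innovations.  M2, Theta and mu = 0.3
# are illustrative assumptions, not the paper's data.
rng = np.random.default_rng(1)
n, N, mu = 20, 5, 0.3

# AR(1) covariance: Omega_ij = mu^|i-j| / (1 - mu^2).
idx = np.arange(n)
Omega_eta = mu ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - mu**2)

D1 = np.eye(n) - np.diag(np.ones(n - 1), -1)   # square D1 of eq. (44)
M2 = np.kron(np.eye(N), np.ones((1, 4)))
Theta = rng.normal(size=N)

# Eq. (48): eq. (46) with (D1' (Omega^eta)^-1 D1)^-1 as the weight.
A = np.linalg.inv(D1.T @ np.linalg.inv(Omega_eta) @ D1)
E_hat = A @ M2.T @ np.linalg.inv(M2 @ A @ M2.T) @ Theta
```

Setting $\mu =0$ makes ${\mathrm{\Omega }}^{\eta }=Id$ and recovers the random-walk solution of eq. (46).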

Figure 6

Simulated and estimated residual under an ARIMA(1,1,0) hypothesis ($\mu =0.3$), in a stock model ($\rho =0.8$).

## 4 Example: Non-financial Corporations’ Capital in Computers and Communication Equipment

Using annual data for non-financial corporations from the annual accounts and quarterly investment from the quarterly accounts, I estimate the following model:

${K}_{t}=\rho {K}_{t-1}+\nu {I}_{t}+{\epsilon }_{t}$(49)

and either

$\begin{array}{rl}\text{(i)}\phantom{\rule{2em}{0ex}}& {\epsilon }_{t}=\mu {\epsilon }_{t-1}+{\eta }_{t}\\ \text{(ii)}\phantom{\rule{2em}{0ex}}& {\epsilon }_{t}-{\epsilon }_{t-1}=\mu \left({\epsilon }_{t-1}-{\epsilon }_{t-2}\right)+{\eta }_{t}\\ \text{(iii)}\phantom{\rule{2em}{0ex}}& {\epsilon }_{t}={\epsilon }_{t-1}+{\eta }_{t}\\ \text{with}\phantom{\rule{2em}{0ex}}& {\eta }_{t}\sim {N}\left(0,{\sigma }^{2}\right)\end{array}$

with $K$ the stock of capital in computers and communication equipment and $I$ the GFCF in computers, electronic and optical products.

Parameter $\rho$ is equal to one minus a constant quarterly depreciation rate.

Parameter $\nu$ accounts for the fact that investment and capital are measured in slightly different nomenclatures, and also that equipment depreciates within the first period after purchase. Parameters $\mu$ and $\sigma$ are the autocorrelation of the high-frequency residual (or of its first difference in the I(1) case) and the standard error of its innovation.
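To make the specifications concrete, here is a small simulation sketch of model (49) under specification (ii); the parameter values, the initial stock and the synthetic investment series are illustrative, not the paper's estimates.

```python
import numpy as np

# Simulation sketch of eq. (49) under specification (ii):
# illustrative parameters, not the paper's estimates.
rng = np.random.default_rng(2)
T, rho, nu, mu, sigma = 80, 0.96, 1.0, 0.3, 1.0

I = 40.0 + rng.normal(scale=5.0, size=T)        # synthetic investment

# (ii): the first difference of eps is AR(1) in the innovations eta.
eta = rng.normal(scale=sigma, size=T)
d_eps = np.zeros(T)
for t in range(1, T):
    d_eps[t] = mu * d_eps[t - 1] + eta[t]
eps = np.cumsum(d_eps)                          # integrate: eps is I(1)

# Capital accumulation: K_t = rho K_{t-1} + nu I_t + eps_t.
K = np.zeros(T)
K[0] = 1000.0                                   # arbitrary initial stock
for t in range(1, T):
    K[t] = rho * K[t - 1] + nu * I[t] + eps[t]
```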

Rationale for testing a constant depreciation rate. The annual accounts for capital are built using the perpetual inventory method at a very detailed level. This method assumes that the lifetime of a piece of equipment follows a truncated log-normal distribution, calibrated to match available information on the average service life. Between its purchase and its scrapping, the equipment is also assumed to depreciate linearly. Although this method does not assume a constant depreciation rate, the combination of the two preceding hypotheses yields depreciation coefficients which are decreasing and convex, and can thus be approximated by a geometric series.
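This approximation can be illustrated numerically. The sketch below uses an assumed log-normal lifetime distribution (the parameters are illustrative, not the French calibration), depreciates each unit linearly over its own lifetime, and fits a geometric rate to the average remaining-value coefficients by OLS on logs.

```python
import numpy as np

# Illustration of the geometric approximation: lifetimes follow an
# assumed log-normal (not the French calibration); each unit is
# depreciated linearly over its own lifetime.
rng = np.random.default_rng(3)
lives = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=100_000)

ages = np.arange(1, 15)
# Average remaining value at each age across the lifetime distribution.
coef = np.array([np.mean(np.clip(1.0 - a / lives, 0.0, None))
                 for a in ages])

# Fit coef_a ~ rate**a by OLS on the log coefficients.
rate = np.exp(np.polyfit(ages, np.log(coef), 1)[0])
```

Each unit's profile $\max (0,1-a/L)$ is convex in age $a$, so their average is decreasing and convex, which is why a geometric series fits well.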

For computers and communication equipment, the assumptions made in the perpetual inventory method in France are best approximated, using an OLS estimation, with a quarterly depreciation rate of $0.87$ and $0.93$ respectively.

Using quarterly accounts for investment, it is not possible to rely on the perpetual inventory method, since the data are not available at the same level of detail. Moreover, this method requires initializing the time series by “sacrificing” as many data points as the maximum service life of the equipment (20 years for communication equipment, 10 years for computers), whereas with the present method the quarterly time series is initialized with the first available annual value.

Table 1:

Results for the optimization of a capital model for non-financial corporations in computers and communication equipment.

I estimated the dynamic model with the following constraints on the parameters: $0<\rho <1$, $0<\mu <1$, $0<\sigma <\mathrm{\infty }$ and $0<\nu <2$.

The results from three different specifications of the error term in eq. (49) are gathered in Table 1. Based on the likelihood criterion, the favoured model is (ii), with I(1) errors and autocorrelated innovations (see optimisation results in Appendix B). The estimated time series is displayed in Figure 7. The depreciation coefficient equals $0.96$, which is consistent with the value estimated for communication equipment from the annual hypotheses of the perpetual inventory method. The standard deviation of the errors is smaller than 3 (million 2005 euros) while, for comparison, the standard error of the quarterly changes in investment is larger than 40 (million 2005 euros).

Figure 8 shows the contributions of the error and of investment to quarterly changes in capital. These contributions correspond to the three terms in the general solution (16), respectively the contributions of the flow indicator (investment), the residual and the initial value. The contribution of the residual is consistently negative, yet the underlying innovation has a mean not statistically different from zero. This is due partly to the high autocorrelation of the residual and partly to the strong persistence of the model, which affects the contribution through the multiplication by the inverse of $Id-{M}_{4}$ in eq. (16). As for the contribution of the initial value, similarly to a first-order moving average, it gradually declines to zero. It is compared with a counter-factual interpolation without indicator. As expected, the error accounts for a smaller share of the result and of its volatility than investment.

Quarterly investment in divisions 26 and 27 of NAF rev. 2 (classification of products) is 5 to 10% smaller than the annual investment time series corresponding to computers and communication equipment (assets AN.111321 and AN.11322 in Eurostat’s classification of assets). One would thus expect $\nu$ to be larger than one. Yet the estimated value of $\nu$ is close to one, indicating that equipment depreciates during the first period after purchase, which is consistent with the method used for the annual counterpart. Macroeconomists should keep this result in mind when writing the dynamics of capital in a model.

Figure 7

Quarterly capital of non-financial corporations in computers and communication equipment.

Figure 8

Quarterly capital of non-financial corporations in computers and communication equipment (Quarterly Changes).

## A Proofs

Proof of Proposition 1 The optimization program can be rewritten:

$\underset{E}{Min}\phantom{\rule{2em}{0ex}}\frac{1}{2}{E}^{\prime }{\mathrm{\Omega }}^{-1}E$(50)

$s.t.\phantom{\rule{2em}{0ex}}{M}_{2}E=\mathrm{\Theta }$(51)

The first-order condition of this program reads, with $\lambda$ the corresponding Lagrange multiplier:

$\left[\begin{array}{cc}{\mathrm{\Omega }}^{-1}& -{{M}_{2}}^{\prime }\\ {M}_{2}& 0\end{array}\right]\left[\begin{array}{c}E\\ \lambda \end{array}\right]=\left[\begin{array}{c}0\\ \mathrm{\Theta }\end{array}\right]$(52)

From this system, one can isolate $\lambda ={\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$ and consequently $E=\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$.$\phantom{\rule{2em}{0ex}}■$
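Proposition 1 is easy to verify numerically. The sketch below draws a random positive-definite $\mathrm{\Omega }$ (all dimensions and matrices are arbitrary, for illustration only) and checks that the closed form solves the stacked first-order system (52).

```python
import numpy as np

# Numerical check of Proposition 1 on random, illustrative data.
rng = np.random.default_rng(4)
n, N = 12, 3

B = rng.normal(size=(n, n))
Omega = B @ B.T + n * np.eye(n)      # symmetric positive definite
M2 = rng.normal(size=(N, n))
Theta = rng.normal(size=N)

# Closed-form solution of program (50)-(51).
E = Omega @ M2.T @ np.linalg.solve(M2 @ Omega @ M2.T, Theta)

# Direct solution of the first-order system (52).
KKT = np.block([[np.linalg.inv(Omega), -M2.T],
                [M2, np.zeros((N, N))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(n), Theta]))
```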

Proof of Proposition 2 Since solution (10) holds and satisfies the constraint eq. (9), one can eliminate the constraint from the program and substitute:

${E}^{\prime }{\mathrm{\Omega }}^{-1}E={\left(\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }\right)}^{\prime }{\mathrm{\Omega }}^{-1}\left(\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }\right)$(53)

$={\mathrm{\Theta }}^{\prime }{{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}}^{\prime }{M}_{2}\mathrm{\Omega }{\mathrm{\Omega }}^{-1}\mathrm{\Omega }{{M}_{2}}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }$(54)

$={\mathrm{\Theta }}^{\prime }{{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}}^{\prime }\mathrm{\Theta }$(55)

$={\mathrm{\Theta }}^{\prime }{\left({M}_{2}\mathrm{\Omega }{{M}_{2}}^{\prime }\right)}^{-1}\mathrm{\Theta }={\mathrm{\Theta }}^{\prime }{\left({\mathrm{\Omega }}^{\theta }\right)}^{-1}\mathrm{\Theta }$(56)

since $\mathrm{\Omega }$ and ${\mathrm{\Omega }}^{\theta }$ are symmetric.$\phantom{\rule{2em}{0ex}}■$

Proof of Proposition 3 The program can be rewritten:

$\underset{{\epsilon }_{1},dE}{Min}\phantom{\rule{2em}{0ex}}\frac{1}{2}d{E}^{\prime }{\left({\mathrm{\Omega }}^{\eta }\right)}^{-1}dE$(57)

$s.t.\phantom{\rule{2em}{0ex}}{M}_{2}T\left[\begin{array}{c}{\epsilon }_{1}\\ dE\end{array}\right]=\mathrm{\Theta }$(58)

with

$T=\left[\begin{array}{cccc}1& 0& \cdots & 0\\ 1& 1& 0& \cdots \\ 1& 1& \ddots & 0\\ 1& 1& \cdots & 1\end{array}\right]\phantom{\rule{2em}{0ex}}⇒\phantom{\rule{2em}{0ex}}T\left[\begin{array}{c}{\epsilon }_{1}\\ dE\end{array}\right]=E$(59)

The Lagrangian of this program is:

${L}=\frac{1}{2}d{E}^{\prime }{\left({\mathrm{\Omega }}^{\eta }\right)}^{-1}dE-{\lambda }^{\prime }\left({C}_{{M}_{2}T}^{1}\text{\hspace{0.17em}}{\epsilon }_{1}+\stackrel{‾}{{M}_{2}T}dE-\mathrm{\Theta }\right)$(60)

with ${C}_{{M}_{2}T}^{1}$ the first column of ${M}_{2}T$, i.e. a $\left(N×1\right)$ vector of $\frac{1-{\rho }^{f}}{1-\rho }$ and $\stackrel{‾}{{M}_{2}T}$ the other columns. The minimization yields:

${\lambda }^{\prime }{C}_{{M}_{2}T}^{1}=0$(61)

${\left({\mathrm{\Omega }}^{\eta }\right)}^{-1}dE={\stackrel{‾}{{M}_{2}T}}^{\prime }\lambda$(62)

${C}_{{M}_{2}T}^{1}\text{\hspace{0.17em}}{\epsilon }_{1}+\stackrel{‾}{{M}_{2}T}dE=\mathrm{\Theta }$(63)

From this system, one can isolate ${\epsilon }_{1}=\frac{{{C}_{{M}_{2}T}^{1}}^{\prime }{\left(\stackrel{‾}{{M}_{2}T}{\mathrm{\Omega }}^{\eta }{\stackrel{‾}{{M}_{2}T}}^{\prime }\right)}^{-1}\mathrm{\Theta }}{{{C}_{{M}_{2}T}^{1}}^{\prime }{\left(\stackrel{‾}{{M}_{2}T}{\mathrm{\Omega }}^{\eta }{\stackrel{‾}{{M}_{2}T}}^{\prime }\right)}^{-1}{C}_{{M}_{2}T}^{1}}$ and $dE={\mathrm{\Omega }}^{\eta }{\stackrel{‾}{{M}_{2}T}}^{\prime }{\left(\stackrel{‾}{{M}_{2}T}{\mathrm{\Omega }}^{\eta }{\stackrel{‾}{{M}_{2}T}}^{\prime }\right)}^{-1}\left(\mathrm{\Theta }-{C}_{{M}_{2}T}^{1}\text{\hspace{0.17em}}{\epsilon }_{1}\right)$. Once ${\epsilon }_{1}$ and $dE$ are known, it is straightforward to compute $E$ using matrix $T$. $\phantom{\rule{2em}{0ex}}■$
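Similarly, the closed forms of Proposition 3 can be checked numerically: with random, illustrative data, the expressions for ${\epsilon }_{1}$ and $dE$ reproduce the constraint (63) and the orthogonality condition (61).

```python
import numpy as np

# Numerical check of Proposition 3 on random, illustrative data:
# C stands for the first column of M2*T and Mbar for the others.
rng = np.random.default_rng(5)
n, N = 12, 3

M2T = rng.normal(size=(N, n))
C = M2T[:, 0]                               # first column, shape (N,)
Mbar = M2T[:, 1:]                           # remaining columns
B = rng.normal(size=(n - 1, n - 1))
Omega = B @ B.T + n * np.eye(n - 1)         # innovation covariance

Theta = rng.normal(size=N)

W = np.linalg.inv(Mbar @ Omega @ Mbar.T)
eps1 = (C @ W @ Theta) / (C @ W @ C)        # closed form for eps_1
dE = Omega @ Mbar.T @ W @ (Theta - C * eps1)
lam = W @ (Theta - C * eps1)                # implied Lagrange multiplier
```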

## B Numerical Results of the Maximization

The Hessian matrix at the optimum when minimizing the opposite of the log-likelihood of the I(1) model reads:

$H=\left[\begin{array}{cccc}40060180& -59033.016977& -0.07228564& 2422665\\ -59033.02& 1598.610727& 7.94060240& -3918.249\\ -0.07228564& 7.940602& 77.56226762& -0.04323517\\ 2422665& -3918.248714& -0.04323517& 158573.1\end{array}\right]$(64)

Its eigenvalues are $40206821$, $12028$, $1500$ and $78$.
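As a quick check (a sketch, not part of the paper's code), the eigenvalues can be recomputed from the matrix in eq. (64); their positivity confirms that the optimum is a proper local minimum of the negative log-likelihood.

```python
import numpy as np

# Recompute the eigenvalues of the Hessian reported in eq. (64);
# all-positive eigenvalues confirm a proper local minimum.
H = np.array([
    [40060180.0, -59033.016977, -0.07228564, 2422665.0],
    [-59033.02, 1598.610727, 7.94060240, -3918.249],
    [-0.07228564, 7.940602, 77.56226762, -0.04323517],
    [2422665.0, -3918.248714, -0.04323517, 158573.1],
])
eigvals = np.linalg.eigvalsh((H + H.T) / 2)   # symmetrize rounding noise
```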

Figure 9

Log-Likelihood on capital data for the I(1) at the optimum as a function of $\rho$, $\mu$, $\nu$ and $\sigma$.

## References

• Bournay, J., and G. Laroque. 1979. “Réflexions sur la Méthodologie d’Élaboration des Comptes Trimestriels.” Annales de l’Insee 36: 3–30.

• Chow, Gregory C., and An-loh Lin. 1971. “Best Linear Unbiased Interpolation, Distribution, and Extrapolation of Time Series by Related Series.” The Review of Economics and Statistics 53 (4): 372–375.

• Cuche, N., and M. Hess. 2000. “Estimating Monthly GDP in a General Kalman Filter Framework: Evidence from Switzerland.” Economic and Financial Modelling, 153–193.

• Denton, Frank T. 1971. “Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization.” Journal of the American Statistical Association, 99–102.

• Di Fonzo, T. 2003. “Temporal Disaggregation of Economic Time Series: Towards a Dynamic Extension.” European Commission (Eurostat) Working Papers and Studies. http://www.oecd.org/std/21781422.pdf.

• Eurostat. 2013. Handbook on Quarterly National Accounts.

• Fernandez, Roque B. 1981. “A Methodological Note on the Estimation of Time Series.” The Review of Economics and Statistics, 471–476.

• Friedman, Milton. 1962. “The Interpolation of Time Series by Related Series.” Journal of the American Statistical Association, 729–757.

• Grégoir, S. 1994. “Note on Temporal Disaggregation with Simple Dynamic Models.” Workshop on Quarterly National Accounts Proceedings, 141–166. http://ec.europa.eu/eurostat/documents/3888793/5815741/KS-AN-03-014-EN.PDF/284f1001-fd36-4999-b007-a22033e8aaf9.

• Insee, Quarterly National Accounts Division. 2012. “Méthodologie des Comptes Trimestriels.” Insee Méthodes 126. https://www.insee.fr/fr/information/2571301.

• Litterman, Robert B. 1983. “A Random Walk, Markov Model for the Distribution of Time Series.” Journal of Business & Economic Statistics, 169–173.

• Liu, H., and S. G. Hall. 2001. “Creating High-Frequency National Accounts with State-Space Modelling: A Monte Carlo Experiment.” Journal of Forecasting, 441–449.

• Proietti, Tommaso. 2006. “Temporal Disaggregation by State Space Methods: Dynamic Regression Methods Revisited.” The Econometrics Journal, 357–372.

• Salazar, E., R. Smith, and M. Weale. 1997. “Interpolation Using a Dynamic Regression Model: Specification and Monte Carlo Properties.” National Institute of Economic and Social Research Discussion Papers 126. https://www.niesr.ac.uk/publications/interpolation-using-dynamic-regression-model-specification-and-monte-carlo-properties.

• Santos Silva, J. M. C., and F. N. Cardoso. 2001. “The Chow-Lin Method Using Dynamic Models.” Economic Modelling, 269–280.

• Whittle, P. 1963. Prediction and Regulation by Linear Least-Square Methods. English Universities Press. http://www.jstor.org/stable/10.5749/j.ctttsphx.

## Footnotes

• 1

The best-known artifact occurs with white-noise residuals: a jump every first quarter, as recalled by Figure 3b.

• 2

$F$ may encompass more than one flow indicator (e.g. new registrations of trucks and a measure of purchases of new machinery) and may measure the flows imperfectly (hence the vector of coefficients $\gamma$ in eq. (2)).

• 3

If both $S$ and $F$ were observed at high frequency, one would use eq. (2) to isolate the residual and maximize its likelihood. In an interpolation (or disaggregation) problem, only eq. (5) is known about the residual, and its likelihood maximization has to be adapted accordingly.


Citation Information: Journal of Time Series Econometrics, Volume 10, Issue 1, 20160007, ISSN (Online) 1941-1928.


© 2018 Walter de Gruyter GmbH, Berlin/Boston.