Abstract
We propose a Bayesian vector autoregressive (VAR) model for mixed-frequency data. Our model is based on the mean-adjusted parametrization of the VAR and allows for an explicit prior on the “steady states” (unconditional means) of the included variables. Based on recent developments in the literature, we discuss extensions of the model that improve the flexibility of the modeling approach. These extensions include a hierarchical shrinkage prior for the steady-state parameters, and the use of stochastic volatility to model heteroskedasticity. We put the proposed model to use in a forecast evaluation using US data consisting of 10 monthly and three quarterly variables. The results show that the predictive ability typically benefits from using mixed-frequency data, and that improvement can be obtained for both monthly and quarterly variables. We also find that the steady-state prior generally enhances the accuracy of the forecasts, and that accounting for heteroskedasticity by means of stochastic volatility usually provides additional improvements, although not for all variables.
1 Introduction
The vector autoregressive model (VAR) is a commonly used tool in applied macroeconometrics, in part because of its simplicity. Over the years, VAR models have developed in many different directions under both frequentist and Bayesian paradigms. The Bayesian approach offers the attractive ability to easily incorporate soft restrictions and shrinkage, which ameliorate the issue of overparametrization. Within the Bayesian framework itself, a large number of papers have developed prior distributions for the parameters in VAR models. Many of these are, in one way or another, variations of the Minnesota prior proposed by Litterman (1986) (see for example the book chapters Del Negro and Schorfheide 2011; Karlsson 2013). Gains in computational power have led to further alternatives in the choice of prior distribution as intractable posteriors can efficiently be sampled using Markov Chain Monte Carlo (MCMC) methods such as the Gibbs sampler (Gelfand and Smith 1990; Kadiyala and Karlsson 1997).
A particular development in the Bayesian VAR literature is the steady-state prior proposed by Villani (2009). The prior is based on a mean-adjusted form of the VAR where the unconditional mean is explicitly parameterized. This seemingly innocuous reparametrization is justified by the fact that practitioners and analysts often have prior information regarding the steady state (or unconditional mean) readily available. In the standard parametrization, a prior on the unconditional mean is only implicit as a function of the other parameters’ priors. Because the forecast in a stationary VAR converges to the unconditional mean as the horizon increases, a prior for the steady-state parameters can help retain the long-run forecast in the direction implied by theory, even if the model is estimated during a period of divergence.^{[1]}
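To make the two parametrizations concrete, the sketch below (plain NumPy; the bivariate VAR(1) coefficients are invented for illustration, not taken from the paper) verifies that the steady state implied by the intercept parametrization is exactly the point to which the long-run forecast converges:

```python
import numpy as np

# Illustrative bivariate VAR(1): z_t = phi + Pi @ z_{t-1} + u_t
# (all coefficient values are made up for this example)
Pi = np.array([[0.5, 0.1],
               [0.2, 0.3]])
phi = np.array([1.0, 0.7])

# Steady state implied by the standard parametrization: psi = (I - Pi)^{-1} phi
psi = np.linalg.solve(np.eye(2) - Pi, phi)

# Mean-adjusted form: (z_t - psi) = Pi @ (z_{t-1} - psi) + u_t.
# Iterating the point-forecast recursion converges to psi for a stationary VAR.
z = np.zeros(2)  # start the forecast away from the steady state
for _ in range(200):
    z = phi + Pi @ z
print(psi, z)  # the long-run forecast equals the steady state
```

In the mean-adjusted form the prior is placed on psi directly, rather than only implicitly through the priors on phi and Pi.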
Another modeling feature that modern VARs often include is stochastic volatility. In many macroeconomic applications, a typical characteristic of the data is that the volatility has varied over time. When a VAR with constant volatility is fitted, the estimated error covariance matrix attempts to balance periods of low and high volatility and find a compromise. Consequently, the predictive distribution does not account for the current level of volatility. Seminal contributions with respect to stochastic volatility were made by Primiceri (2005); Cogley and Sargent (2005), and numerous follow-up studies have since documented the usefulness of stochastic volatility for forecasting, see e.g. work by Clark (2011); D’Agostino, Gambetti, and Giannone (2013); Clark and Ravazzolo (2015); Carriero, Clark, and Marcellino (2016). Because of this established utility, we also allow for more flexibility in our model by modeling time variation in the error covariance matrix.
VARs are often estimated on a quarterly basis, see e.g. Stock and Watson (2001); Adolfson, Lindé, and Villani (2007). The reason is simply that many variables of interest are unavailable at higher frequencies, although the majority are sampled monthly or even more frequently. When the data are available at different frequencies, common practice is to aggregate high-frequency variables to the lowest frequency present. Such an aggregation incurs a loss of information for variables measured throughout the quarter: the aggregated quarterly values are typically weighted sums of the constituent months, and so any information carried by a within-quarter trend or pattern is disregarded by the aggregation. From a forecasting perspective, an analyst is thereby forced to disregard part of the information set when constructing a forecast from within a quarter, as the most recent realizations are only available for the high-frequency variables. Another reason for utilizing higher frequencies of the data is that the number of observations is increased. A VAR estimated on data collected over, say, 10 years makes use of 120 observations of the monthly variables instead of being limited to the 40 aggregated quarterly observations.
Multiple approaches to dealing with the problem of mixed frequencies are available in the literature. Mixed data sampling (MIDAS) regressions and the MIDAS VAR, proposed by Ghysels, Sinko, and Valkanov (2007) and Ghysels (2016), respectively, use fractional lag polynomials to regress a low-frequency variable on lags of itself as well as high-frequency lags of other variables. This approach is predominantly frequentist, although Bayesian versions are available (Rodriguez and Puggioni 2010; Ghysels 2016). A second approach, which is the focus of this work, is to exploit the general ability of state-space modeling to handle missing observations (Harvey and Pierse 1984). Eraker et al. (2015), concerned with Bayesian estimation, used this idea to treat intra-quarterly values of quarterly variables as missing data and proposed measurement and state-transition equations for the monthly VAR. Schorfheide and Song (2015) considered forecasting using a construction along the lines of Carter and Kohn (1994) and provided empirical evidence that the mixed-frequency VAR improved forecasts of 11 US macroeconomic variables as compared to a quarterly VAR. In terms of flexible time-varying models with mixed-frequency data, Cimadomo and D’Agostino (2016) employed the mixed-frequency VAR together with time-varying parameters and stochastic volatility to cope with a change in frequency of the data. Following up on the work by Schorfheide and Song (2015), Götz and Hauzenberger (2018) recently showed that more flexible models that include stochastic volatility tend to improve forecasts also within this framework.
The main contribution of this paper is that we extend the mixed-frequency toolbox by incorporating prior information on the steady states, and by adding stochastic volatility to the model. Thus, we effectively combine the steady-state parametrization of Villani (2009) with the state-space modeling approach for mixed-frequency data of Schorfheide and Song (2015) and the common stochastic volatility (CSV) model proposed by Carriero et al. (2016). The proposed model accommodates explicit modeling of the unconditional mean with data measured at different frequencies. In order to employ the model in a realistic forecasting situation, we use a real-time dataset consisting of 13 macroeconomic variables for the US, where 10 of the variables are sampled monthly, and the remaining three are available quarterly. We implement the steady-state prior using the standard Villani (2009) approach, and using the hierarchical structure presented by Louzis (2019). In our empirical application, we find that, for most variables, mixed-frequency data, stochastic volatility, and steady-state information improve forecasting accuracy as compared to models without any of the aforementioned features.
The structure of the paper is as follows. Section 2 describes the main methodology, Section 3 provides information about the data and details about the implementation, and Section 4 evaluates the forecasting performance. Section 5 concludes.
2 Combining Mixed Frequencies with Steady-State Beliefs
The mixed-frequency method adopted in this work is a state-space-based model which follows the work by Mariano and Murasawa (2010); Schorfheide and Song (2015); Eraker et al. (2015). There are several modeling approaches available for handling mixed-frequency data, including MIDAS (Ghysels et al. 2007), bridge equations (Baffigi, Golinelli, and Parigi 2004) and factor models (Mariano and Murasawa 2003; Giannone, Reichlin, and Small 2008). We do not review these further here, but instead refer the reader to the survey by Foroni and Marcellino (2013) and an early comparison conducted by Kuzin, Marcellino, and Schumacher (2011).
2.1 State-Space Representation of the Mixed-Frequency Model
To cope with mixed observed frequencies of the data, we assume the system to be evolving at the highest available frequency. This assumption frames the problem of frequency mismatch as a missing data problem. By doing so, the approach naturally lends itself to a state-space representation of the system in which the underlying monthly series of the quarterly variables become the latent states of the system. Because we have a mix of monthly and quarterly frequencies in our empirical application, we will in the following proceed with the presentation of the model from this perspective. It should, however, be noted that other compositions of frequencies are viable within the same framework.
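As a minimal illustration of why the missing-data view is convenient, the toy filter below (NumPy; the scalar AR(1) state, the every-third-month observation pattern, and all variances are invented for illustration) treats a "quarterly" series as a monthly latent process and simply skips the measurement update in months without a release:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state-space: scalar latent monthly series x_t = 0.8 x_{t-1} + e_t,
# observed with noise only every third month (a "quarterly" release).
T, a, q, r = 60, 0.8, 1.0, 0.25
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=T)
observed = (np.arange(T) % 3 == 2)   # release months only

# Kalman filter that skips the measurement update when y_t is missing
m, P = 0.0, 1.0
filt = np.empty(T)
for t in range(T):
    m, P = a * m, a * a * P + q          # prediction step, every month
    if observed[t]:                      # update only in release months
        K = P / (P + r)
        m, P = m + K * (y[t] - m), (1 - K) * P
    filt[t] = m
print(filt[-1])
```

Between releases the filter propagates the latent monthly state and its uncertainty; the same mechanism, in multivariate form, underlies the state-space treatment used here.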
The VAR model at the core of the analysis is specified for the high-frequency and partially missing variables. More specifically, a VAR(p) for the n × 1 vector z_{t} is employed such that

z_{t} = Φ + Π_{1}z_{t−1} + ⋯ + Π_{p}z_{t−p} + u_{t},  u_{t} ∼ N(0, Σ). (1)
The model in (1) is a conventional VAR specification, but, in the spirit of Villani (2009), we instead employ the mean-adjusted form

z_{t} − Ψ = Π_{1}(z_{t−1} − Ψ) + ⋯ + Π_{p}(z_{t−p} − Ψ) + u_{t}, (2)

where Ψ = (I_{n} − Π_{1} − ⋯ − Π_{p})^{−1}Φ is the unconditional mean (steady state) of z_{t}.
However, common practice is to use (1) with a loose prior on Φ, which implicitly defines an intricate (but loose) prior on Ψ and, subsequently, on the steady states themselves.
Next, we partition the high-frequency underlying process z_{t} as z_{t} = (z′_{m,t}, z′_{q,t})′, where z_{m,t} collects the n_{m} monthly variables and z_{q,t} the n_{q} quarterly variables.
To distinguish between the underlying process and actual observations, we denote the latter by y_{t}. Because not all variables are observed at every time point t, the dimension n_{t} of y_{t} is not always equal to n = n_{m} + n_{q}. The observed data in y_{t} are generally assumed to be some linear aggregate of the underlying process z_{t}.
We let M_{q,t} be the n_{q} identity matrix
The aggregation matrix Λ_{q} represents the assumed aggregation scheme of unobserved high-frequency latent observations z_{q,t} into occasionally observed low-frequency observations y_{q,t}. To make the presentation simpler, we can write the bottom block of ΛZ_{t} as
Schorfheide and Song (2015), working with log-levels of the data, used the intra-quarterly average
Finally, the expression can be written as
Because the set of weights in (4)–(6) sum to three, we define our latent variable of interest to be
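To illustrate, the snippet below (NumPy) builds the triangular weights that arise when an intra-quarterly average of log-levels is expressed in terms of monthly growth rates. These are the standard Mariano–Murasawa-style weights, which likewise sum to three; the exact expressions in (4)–(6) are not reproduced here, and the monthly values are invented:

```python
import numpy as np

# Triangular weights linking one quarterly growth observation to five
# monthly growth rates (most recent month first); they sum to three.
w = np.array([1, 2, 3, 2, 1]) / 3

# Rescaling the latent monthly variable by three is equivalent to
# normalizing the weights to sum to one:
w_tilde = w / w.sum()

z = np.array([2.0, 1.0, 3.0, 2.0, 4.0])  # five months of toy growth rates
y_q = w_tilde @ z                         # implied quarterly observation
print(w.sum(), y_q)
```

The rescaling keeps the latent process on the same scale as the observed quarterly series.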
Equations (2) and (3) form a state-space model that can be used for estimation of the model. Schorfheide and Song (2015) suggested an efficient compact formulation of the employed state-space model that is statistically equivalent but computationally more convenient. The compact treatment is based on the observation that the set of monthly variables included in the model are observed for all time points except for a handful at the end of the sample, known as a ragged edge (Bańbura, Giannone, and Reichlin 2011). The treatment proposed by Schorfheide and Song (2015) is to let the monthly variables enter the model as exogenous for t = 1, …, T_{b}, where T_{b} denotes the final time period where the monthly variables are all observed. By this approach, the monthly variables are excluded from the state equation. The state dimension is thereby reduced from np to n_{q}(p + 1), which substantially improves the computational efficiency.
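For the dimensions used later in the empirical application (10 monthly and three quarterly variables, p = 12), the reduction in state dimension is easy to quantify:

```python
# State dimension of the full companion form versus the compact form of
# Schorfheide and Song (2015), for the application's dimensions.
n_m, n_q, p = 10, 3, 12      # monthly vars, quarterly vars, lag length
n = n_m + n_q

full_dim = n * p             # companion-form state: all variables, all lags
compact_dim = n_q * (p + 1)  # compact state: quarterly variables only

print(full_dim, compact_dim)  # 156 versus 39
```

Since simulation smoothing scales poorly in the state dimension, this fourfold reduction matters in every MCMC iteration.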
Let
The above state-space model remains valid as long as t ≤ T_{b}, that is, as long as all of the monthly series are observed. To deal with ragged edges and unbalanced monthly data for t > T_{b}, we follow Ankargren and Jonéus (2020) and adaptively add the monthly series with missing data as appropriate. Contrary to Schorfheide and Song (2015), we thereby avoid use of the full companion form altogether.
2.2 Extending the Basic Steady-State Model
The standard Bayesian VAR (BVAR) with the steady-state prior typically produces good forecasts and is for this reason used by e.g. Sveriges Riksbank as one of its main forecasting models (see Iversen et al. 2016). However, recent work in the VAR literature demonstrates that allowing for more flexibility may be beneficial. In particular, letting the error covariance matrix in the model vary over time by incorporating stochastic volatility often improves the predictive ability, as demonstrated by e.g. Clark (2011); Clark and Ravazzolo (2015); Carriero et al. (2016). Moreover, studies such as Bańbura, Giannone, and Reichlin (2010); Giannone, Lenza, and Primiceri (2015); Koop (2013) have shown that medium-sized models including 10–20 variables often outperform smaller models when forecasting. The caveat, however, when extending the size of the model under the steady-state prior is that the researcher must set a prior mean and variance for the unconditional mean of each variable in the model. For key variables such as inflation, gross domestic product (GDP) growth and unemployment this task is relatively effortless, but it can be more challenging when the previous literature does not offer any guidance on reasonable prior specifications. To simplify the specification process, Louzis (2019) developed a hierarchical steady-state prior that effectively relieves the researcher of eliciting the prior variances of the steady-state parameters; only prior means are required. Providing a sensible prior for the unconditional mean is generally much simpler than quantifying the uncertainty of one’s specification. We next briefly describe the stochastic volatility and hierarchical steady-state prior specifications with which we extend our basic model.
2.2.1 Stochastic Volatility
The stochastic volatility model we employ is the CSV model of Carriero et al. (2016), which is a parsimonious and simple approach for letting the error covariance matrix in the model vary over time. Under the CSV variance specification, the state equation describing the high-frequency VAR is given by

z_{t} − Ψ = Π_{1}(z_{t−1} − Ψ) + ⋯ + Π_{p}(z_{t−p} − Ψ) + u_{t},  u_{t} ∼ N(0, f_{t}Σ),
log(f_{t}) = ϕ log(f_{t−1}) + ν_{t},  ν_{t} ∼ N(0, σ²_{ν}).
The log-volatility log(f_{t}) thus evolves as an AR(1) process without intercept with parameters (ϕ, σ²_{ν}).
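A short simulation illustrates the mechanics of the CSV specification (NumPy; the values of the persistence, the innovation standard deviation, and the covariance matrix are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the common stochastic volatility (CSV) factor: the log-volatility
# follows a zero-intercept AR(1), log f_t = phi * log f_{t-1} + nu_t.
# (phi and sigma_nu here are illustrative values, not estimates.)
phi, sigma_nu, T = 0.95, 0.1, 500
log_f = np.zeros(T)
for t in range(1, T):
    log_f[t] = phi * log_f[t - 1] + rng.normal(scale=sigma_nu)
f = np.exp(log_f)

# Under CSV the error covariance at time t is f_t * Sigma: a single factor
# scales the entire covariance matrix up in turbulent periods and down in
# calm ones, preserving all correlations.
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
Sigma_t = f[:, None, None] * Sigma     # (T, 2, 2) array of covariances
print(f.min(), f.max())
```

The key design choice is parsimony: one scalar process drives the volatility of all equations, instead of one volatility process per variable.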
2.2.2 Hierarchical Steady-State Priors
The appealing feature of the steady-state prior is that it allows the researcher to use readily available information about long-run steady-state levels of the included variables. For the reasons discussed earlier, Louzis (2019) proposed a hierarchical steady-state prior using the normal-gamma construction used by e.g. Griffin and Brown (2010); Huber and Feldkircher (2019). The reason for such an approach is that the benefits of the steady-state prior are larger when we have accurate and relatively informative priors for the steady states. The normal-gamma prior employs a hierarchical specification that provides sufficiently heavy tails to allow for a large degree of shrinkage to the prior mean when appropriate, and more flexibility otherwise. In effect, the researcher only has to provide a prior mean for each steady-state parameter as the associated variances are instead obtained from the hyperparameters higher up in the hierarchy.
To be more precise, the hierarchical steady-state prior is based on the normal-gamma prior proposed by Griffin and Brown (2010) that employs a hierarchical specification given by
Griffin and Brown (2010) showed that the variance of the unconditional prior for
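The heavy tails can be seen in a small simulation (NumPy; the gamma shape and rate below are one illustrative parametrization of a normal-gamma scale mixture, not the exact hyperparameters of the prior):

```python
import numpy as np

rng = np.random.default_rng(42)

# Normal-gamma prior as a scale mixture of normals (one common
# parametrization, with illustrative hyperparameter values):
# lambda_i ~ Gamma(theta, rate), psi_i | lambda_i ~ N(0, lambda_i).
theta, rate, n = 0.5, 0.5, 200_000
lam = rng.gamma(shape=theta, scale=1 / rate, size=n)
psi = rng.normal(loc=0.0, scale=np.sqrt(lam))

# Compare with a fixed-variance normal of the same overall spread: the
# mixture puts more mass both near the prior mean (strong shrinkage)
# and far out in the tails (little shrinkage).
normal = rng.normal(scale=psi.std(), size=n)
kurt = lambda x: ((x - x.mean()) ** 4).mean() / x.var() ** 2
print(kurt(psi), kurt(normal))  # mixture kurtosis well above the normal's 3
```

For this parametrization the marginal kurtosis is 3 + 3/theta, so small shape values give markedly heavier tails than a fixed-variance normal prior.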
2.3 Prior Distributions
We use a standard normal inverse Wishart prior for the VAR coefficients and error covariance (Π, Σ). Thus, we have a priori
While Σ describes the fixed covariance structure, the time-varying volatility in the model is governed by the latent volatility f_{t}. For the two parameters associated with its evolution,
As discussed in Section 2.2, the priors for the steady-state parameters are normal conditional on the local shrinkage parameters. Instead of fixing the top-level hyperparameters
2.4 Posterior Sampling
To estimate the model and produce forecasts, we employ Markov Chain Monte Carlo (MCMC). The MCMC algorithm consists of multiple Gibbs sampling steps, which we describe next. We relegate some of the details to Appendix A.
2.4.1 Sampling the Latent Monthly Variables
To sample from the posterior distribution of the latent monthly variables,
2.4.2 Sampling the Regression and Covariance Parameters
Given
By standard results (Kadiyala and Karlsson 1993, 1997), the conditional posterior distribution is also normal inverse Wishart. It is thereby possible to sample from the marginal posterior of Σ followed by the full conditional posterior of Π:
The posterior moments are standard given the transformation of the model and are presented in Appendix A. A draw can efficiently be made from the posterior of Π by reverting to its matrix-normal form:
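A sketch of such a matrix-normal draw (NumPy; the posterior moments below are invented toy values, and the helper name rmatnorm is ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def rmatnorm(M, U, V, rng):
    """One draw from the matrix-normal MN(M, U, V):
    X = M + chol(U) @ Z @ chol(V).T with Z i.i.d. standard normal."""
    A = np.linalg.cholesky(U)
    B = np.linalg.cholesky(V)
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B.T

# Toy posterior moments (illustrative numbers, not the paper's): k regressors
# and n equations; U is the k x k row covariance and V the n x n column
# covariance (the role the error covariance plays for the coefficient matrix).
k, n = 4, 2
M = np.zeros((k, n))
U = 0.1 * np.eye(k)
V = np.array([[1.0, 0.3], [0.3, 0.5]])

draws = np.stack([rmatnorm(M, U, V, rng) for _ in range(20_000)])
# vec of a matrix-normal has covariance kron(V, U) (column-major vec);
# check one implied moment: Var(X[0, 0]) = U[0, 0] * V[0, 0]
print(draws[:, 0, 0].var())  # close to 0.1
```

Drawing in matrix form requires only two small Cholesky factorizations instead of one factorization of the full kn × kn Kronecker covariance.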
2.4.3 Sampling the Steady-State Parameters
Prior to sampling the steady-state parameters, the associated hyperparameters are drawn from their respective conditional posterior distributions. The conditional posterior of the global shrinkage parameter
The conditional posterior of
Given the hyperparameters, the local shrinkage parameters
Next, by dividing both sides of the model (7) by
The posterior moments provided by Villani (2009) therefore apply directly for the preceding transformation of the model. Let
The posterior distribution of ψ is
2.4.4 Sampling the Latent Volatility
Conditional on the other parameters in the model, we can obtain
Squaring and taking the logarithm of the elements of
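The effect of this transformation can be checked numerically (NumPy; the fixed log-volatility value is illustrative). Squaring and logging turns the multiplicative volatility into an additive, linear signal plus a log-chi-square(1) error, which is the form that standard mixture-of-normals approximations exploit:

```python
import numpy as np

rng = np.random.default_rng(3)

# If u_t = sqrt(f_t) * e_t with e_t ~ N(0, 1), then
#   log(u_t^2) = log(f_t) + log(e_t^2),
# i.e. a linear equation in log(f_t) with a log-chi-square(1) error.
T = 100_000
log_f = 0.5                      # a fixed log-volatility for illustration
e = rng.standard_normal(T)
u = np.exp(0.5 * log_f) * e
lhs = np.log(u ** 2)

# The log-chi-square(1) error has mean about -1.27 and variance pi^2 / 2,
# which is why it is commonly approximated by a mixture of normals.
print(lhs.mean() - log_f)        # close to -1.27
```

Conditional on the mixture indicators, the linearized system is Gaussian, so the volatility path can be drawn with standard state-space simulation methods.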
The posteriors of the parameters of the volatility process are standard given f. The posterior distribution of ϕ is a truncated normal distribution whereas the posterior distribution of
3 Data and Implementation Details
In this section, we provide information about the data used and some details regarding the implementations.
3.1 Data
Our dataset consists of 13 key macroeconomic variables for the United States. The dataset we use largely parallels that of Carriero et al. (2016); Louzis (2019), with the exception that we use consumer price index (CPI) inflation as the sole measure of inflation. The data consist of 10 monthly and three quarterly variables and range over the period 1980M01–2018M12. Most of the included variables are available with real-time vintages in the ALFRED database. For variables not available in ALFRED, we turn to FRED and FRED-MD (McCracken and Ng 2016). A summary of the data is provided in Table 1.
Series  Transformation  Frequency  Real time  Prior mean  Prior std.
Nonfarm payrolls^{a}  1200∆ln  Monthly  Yes  3  0.5 
Hours^{ab}  X13, 1200∆ln  Monthly  ≥2011  3  0.5 
Unemployment rate^{a}  None  Monthly  Yes  6  1 
Federal funds rate^{a}  None  Monthly  Yes  5  0.7 
Bond spread^{b}  Monthly ave.  Monthly  Yes  1  1 
Stock market index^{c}  1200∆ln  Monthly  No  0  2 
Personal consumption^{a}  1200∆ln  Monthly  Yes  3  0.7 
Industrial production^{a}  1200∆ln  Monthly  Yes  3  0.7 
Capacity utilization^{a}  None  Monthly  Yes  80  0.7 
CPI inflation^{a}  1200∆ln  Monthly  Yes  2  0.5 
Nonresidential inv.^{a}  400∆ln  Quarterly  Yes  3  1.5 
Residential inv.^{a}  400∆ln  Quarterly  Yes  3  1.5 
GDP growth^{a}  400∆ln  Quarterly  Yes  2  0.5 
Sources: ^{a}ALFRED, Federal Reserve Bank of St. Louis.
^{b}FRED, Federal Reserve Bank of St. Louis.
^{c}FREDMD, McCracken and Ng (2016).
Notes: 1. Real-time data for Hours is available in ALFRED from 2011 onwards; data from FRED is used prior to 2011. Hours is seasonally adjusted with X-13ARIMA-SEATS using the seasonal package in R (Sax and Eddelbuettel 2018).
2. A list of the IDs of the variables is available in Appendix C.
We follow Louzis (2019) and transform the raw series to growth rates. For our monthly variables, we use month-on-month growth rates, whereas the three quarterly variables are computed as quarter-on-quarter rates. All growth rates are annualized. The final two columns of Table 1 give the prior means and standard deviations of the steady-state prior for each variable.
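The transformations in Table 1 amount to annualized log-differences; a minimal sketch with invented index values:

```python
import numpy as np

# Annualized growth rates as in Table 1: 1200 * diff(log) for monthly
# series and 400 * diff(log) for quarterly series (toy index values below).
monthly_index = np.array([100.0, 100.2, 100.5, 100.4])
monthly_growth = 1200 * np.diff(np.log(monthly_index))

quarterly_index = np.array([100.0, 100.8])
quarterly_growth = 400 * np.diff(np.log(quarterly_index))

print(monthly_growth.round(2))    # a 0.2% monthly rise is roughly 2.4% annualized
print(quarterly_growth.round(2))  # a 0.8% quarterly rise is roughly 3.2% annualized
```

The factors 1200 and 400 are simply 100 times the number of periods per year, so monthly and quarterly rates are on a comparable annualized scale.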
We use real-time data where available throughout the forecasting exercise. To obtain a realistic pattern of available observations, we first consider the information set available on the 10th day of every month. Figure 1 displays the publication pattern during 2005–2018 and shows the number of months that have passed since the last available publication.
Figure 1:
Figure 1 shows a characteristic pattern for real-time forecasting of macroeconomic data. Data for financial and select real and nominal variables are already available for the previous month, whereas the previous month’s outcomes for some of the monthly variables are unknown. The pattern of availability displayed shows that consumption and inflation are available with a one-month delay in every month except for a handful of occasions. Similarly, nonfarm employment, hours, unemployment and the federal funds rate are typically available with a zero-month delay, with the exception of a few months. In the final dataset that we use in our forecasting exercise, we make adjustments to the publication delays in order to obtain a more uniform dataset. The adjustments change the publication structure in the vintages so that the aforementioned variables have the same delay in all vintages, i.e. consumption and inflation are always observed with a delay of one month, whereas nonfarm employment, hours, unemployment and the federal funds rate are always observed without any delay. Consequently, in every month in which we make our forecasts, observations are available for the preceding month for six of the monthly variables, whereas four still lack data.
3.2 Implementation Details
The mixed-frequency models that we estimate use p = 12 lags following e.g. Bańbura et al. (2010). The overall tightness in the prior distribution for the regression parameters is set to
For the hierarchical steady-state prior, we let c_{0} = c_{1} = 0.01 in line with Huber and Feldkircher (2019); Louzis (2019). To set the scale of the proposal distribution for
For the parameters of the logvolatility process, we let the prior mean and standard deviation for ϕ be
4 Empirical Application: Real-Time Forecasting of Key US Variables
In this section, we assess the forecasting ability of the model that we propose. The assessment is carried out by studying the out-of-sample predictive accuracy of the model based on the real-time dataset for the US that was discussed in Section 3.
4.1 Forecasting Setup
The quarterly steady-state Bayesian VAR model has been used in several previous studies, see for example Adolfson et al. (2007); Österholm (2008); Villani (2009); Clark (2011); Ankargren, Bjellerup, and Shahnazarian (2017). The model is employed both for policy purposes and for forecasting and is implemented in the Matlab toolbox BEAR developed at the European Central Bank (Dieppe, Legrand, and van Roye 2016). Our empirical application targets this audience, and our main interest lies in seeing whether the components we add to the model—mixed frequencies, stochastic volatility and hierarchical steady states—improve upon the benchmark model of Villani (2009) estimated on single-frequency data. The forecasting results are also compared to models using Minnesota-style normal inverse Wishart priors, i.e. without the steady-state component. A summary of the models that we include in the forecast evaluation is presented in Table 2.
Model  Description 

Benchmark  Single-frequency model with the steady-state prior and a normal inverse Wishart prior for (Π, Σ), constant error covariance. Includes all 13 variables aggregated to the quarterly frequency or the 10 monthly variables depending on context. 
MinnIW  Normal inverse Wishart prior, constant error covariance 
MinnCSV  Normal inverse Wishart prior with common stochastic volatility 
SSIW  Steady-state prior with a normal inverse Wishart prior for (Π, Σ), constant error covariance 
SSCSV  Steady-state prior with a normal inverse Wishart prior for (Π, Σ) with common stochastic volatility 
SSNGIW  Hierarchical normal-gamma steady-state prior with a normal inverse Wishart prior for (Π, Σ), constant error covariance 
SSNGCSV  Hierarchical normal-gamma steady-state prior with a normal inverse Wishart prior for (Π, Σ) with common stochastic volatility 
The benchmark model is the steady-state model estimated on single-frequency data. Depending on whether it serves as benchmark for quarterly or monthly variables, we include either the full set of variables (aggregated to the quarterly frequency) or the 10 monthly variables. The quarterly VAR uses p = 4, whereas for the monthly VAR p = 12.
We use a recursive forecasting scheme to evaluate the forecasting performance of the considered models. Beginning in January 2005, we estimate the models and make forecasts and then recursively add months to the set of data used for estimation. The benchmark models use the balanced data, whereas the mixedfrequency models automatically handle the ragged edges.
The forecasting ability of the models is evaluated with respect to both point and density forecasts. For point forecasts, we consider the root mean squared error (RMSE). For density forecasts, we compute univariate and multivariate log predictive density scores (LPDS). We do so by fitting a normal density to the draws from the predictive distribution following e.g. Adolfson et al. (2007); Carriero et al. (2015). That is, we compute
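The LPDS computation can be sketched as follows (NumPy; the predictive draws and outcomes are simulated toy values, and the function name lpds is ours):

```python
import numpy as np

rng = np.random.default_rng(11)

def lpds(draws, outcome):
    """Log predictive density score from a normal fit to predictive draws.
    draws: (S, n) posterior predictive sample; outcome: (n,) realized value."""
    mu = draws.mean(axis=0)
    S = np.cov(draws, rowvar=False)          # fitted predictive covariance
    n = outcome.size
    dev = outcome - mu
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (n * np.log(2 * np.pi) + logdet
                   + dev @ np.linalg.solve(S, dev))

# Toy predictive distribution and outcomes (all numbers illustrative)
draws = rng.multivariate_normal([2.0, 1.0], [[1.0, 0.2], [0.2, 0.5]], size=5000)
print(lpds(draws, np.array([2.1, 0.9])))   # close to the predictive mode
print(lpds(draws, np.array([6.0, -2.0])))  # far out in the tail: much lower
```

Higher scores indicate that the realized outcome was assigned more predictive density, so differences in LPDS across models directly compare density-forecast quality.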
Which vintage the forecasts should be evaluated against is an important question when using real-time data. Two alternatives are commonly employed in the literature. The first, used by e.g. Romer and Romer (2000) and Clark (2011), is to use the second available vintage. This choice can be justified by acknowledging that revisions that occur after longer periods of time may be unforeseeable and more structural in nature, relating to e.g. definitions, methods of measurement, etc. The second available estimate therefore provides a less noisy estimate than the initially available value, yet is produced in the same environment in which the forecaster is active. The second common approach, followed by e.g. Schorfheide and Song (2015), is to use the most recent vintage. For whatever reason revisions may have taken place, the currently available data provide the best estimates of e.g. inflation and output in previous years. We follow the latter approach and use the most recent vintage for evaluating the forecasts, but for transparency provide the main results of the evaluation using the second available vintage in Appendix D.
4.2 In-Sample Estimation
As a preliminary analysis, we begin by estimating the mixed-frequency VAR model using the steady-state (SS) and steady-state normal-gamma (SSNG) priors to see whether the obtained steady-state posteriors differ. Because the long-term forecasts are largely determined by the steady-state posterior, seeing whether differences are present is of direct importance for forecasts beyond the immediate short term. Figure 2 displays kernel density estimates of the posterior distributions from the mixed-frequency model with CSV. As a point of reference, the figure includes the prior distribution detailed in Table 1.
Figure 2:
As expected, the posteriors in Figure 2 are for the most part similar. The modes of the posteriors are close to perfectly aligned for variables such as bond spread, inflation, residential investment and GDP. For others—e.g., hours, the federal funds rate and industrial production—the SSNG posteriors deviate more from both the priors and the SS posteriors.
While the steady states are of central importance for the levels of the forecasts, the precision thereof is highly influenced by the CSV factor. Figure 3 displays the mean of
Figure 3:
Figure 3 shows that there is little difference between the estimated volatility factors in the two steady-state models. Peaks of volatility are aligned and reach the same levels, while the level of the factor in the SSNG model is slightly higher in normal times. Both display the entrance into the Great Moderation in the beginning of the 1980s with heightened volatility again around the recent financial crisis. The interpretation of the level of the factor is that the time-invariant elements in the error covariance matrix Σ have been scaled by f_{t}, which roughly amounts to an amplification by a factor of 4–6 during the recent financial crisis and a compression of around 0.5–0.75 in recent years. This feature has a direct effect on the width of the predictive distribution.
4.3 Forecast Evaluation
In this section, we present the main results of the forecast evaluation. For space considerations, the presentation includes the results from the joint evaluations as well as the univariate results for the three quarterly variables and the three monthly variables that are typically of primary interest: the inflation, federal funds and unemployment rates.
4.3.1 Joint Forecasting Results
Table 3 presents the results from the LPDS computed jointly. We compute the LPDS separately for the set of quarterly and monthly variables, respectively. The forecast horizons h in the table correspond to the frequency of the respective set of variables.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative joint LPDS, Quarterly  
MinnIW  −0.11  −0.33^{*}  0.01  0.24  0.33  0.52  0.52  0.45  0.41 
SSIW  −0.17  −0.48^{*}  −0.22  −0.09  −0.08  0.05  0.02  −0.07  −0.13 
SSNGIW  −0.14  −0.42^{*}  −0.15  −0.02  −0.04  0.07  0.02  −0.09  −0.12 
MinnCSV  −0.36  −1.01^{**}  −0.76^{*}  −0.57^{*}  −0.49  −0.22  −0.05  −0.06  −0.05 
SSCSV  −0.42  −1.07^{**}  −0.89^{*}  −0.77^{*}  −0.74^{*}  −0.52  −0.39  −0.39  −0.39 
SSNGCSV  −0.43  −1.07^{**}  −0.86^{*}  −0.73^{*}  −0.69^{*}  −0.44  −0.32  −0.36  −0.36 
Relative joint LPDS, Monthly  
MinnIW  −1.74^{**}  −1.49^{**}  −1.21^{**}  −1.04^{*}  −1.14^{**}  −1.03^{**}  −1.02^{**}  −0.88^{**}  
SSIW  −1.85^{**}  −1.44^{**}  −1.12^{**}  −1.04^{**}  −1.14^{**}  −1.04^{**}  −1.05^{**}  −0.95^{**}  
SSNGIW  −1.83^{**}  −1.47^{**}  −1.19^{**}  −1.03^{*}  −1.22^{**}  −1.12^{**}  −1.08^{**}  −0.95^{**}  
MinnCSV  −1.96^{*}  −2.93^{*}  −3.01^{*}  −3.03^{*}  −2.98^{*}  −2.77^{*}  −2.62^{*}  −2.29^{*}  
SSCSV  −2.17^{*}  −3.01^{*}  −3.00^{*}  −3.07^{*}  −3.13^{*}  −3.01^{*}  −2.98^{*}  −2.65^{*}  
SSNGCSV  −2.07^{*}  −3.07^{*}  −3.03^{*}  −3.09^{*}  −3.13^{*}  −2.97^{*}  −2.86^{*}  −2.53^{*}
Note: The forecast horizons h refer to quarters and months, respectively, for the two sets of variables. The scores in the table display the score of the model in the first column minus the score of the benchmark model, whereby negative entries indicate that the mixed-frequency model is superior. Bold entries show the minimum in each column. The benchmark model for the quarterly set of variables is a VAR(4) including all 13 variables aggregated to the quarterly frequency. For the monthly LPDS, the benchmark model is a VAR(12) including the 10 monthly variables. In both cases, the steady-state prior with a constant error covariance matrix is used. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey, Leybourne, and Newbold (1997).
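A sketch of the test statistic referred to in the note (NumPy; the loss series are simulated toy values, and dm_stat is our helper name):

```python
import numpy as np

rng = np.random.default_rng(5)

def dm_stat(loss_a, loss_b, h=1):
    """Diebold-Mariano statistic with the Harvey, Leybourne and Newbold
    (1997) small-sample correction, for h-step-ahead forecast losses."""
    d = loss_a - loss_b                  # loss differential
    T = d.size
    d_bar = d.mean()
    # long-run variance of d_bar: autocovariances up to lag h - 1
    gamma = [np.mean((d[: T - k] - d_bar) * (d[k:] - d_bar)) for k in range(h)]
    lrv = gamma[0] + 2 * sum(gamma[1:])
    dm = d_bar / np.sqrt(lrv / T)
    correction = np.sqrt((T + 1 - 2 * h + h * (h - 1) / T) / T)
    return correction * dm

# Toy example: model A has visibly larger squared errors than model B
err_a = rng.normal(scale=1.3, size=160)
err_b = rng.normal(scale=1.0, size=160)
print(dm_stat(err_a ** 2, err_b ** 2))  # positive values mean A's losses are larger
```

The corrected statistic is compared with Student-t critical values, which guards against over-rejection in the moderate evaluation samples typical of forecast comparisons.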
Across all horizons and sets of variables, SSCSV and SSNGCSV dominate, with only one exception in which MinnCSV does slightly better than SSCSV. For the quarterly set of variables, SSCSV outperforms the other models for h > 0, with the SSNGCSV model ranking first for the nowcast. MinnCSV ranks higher than the constant-volatility models for the initial horizons, but for the long-term forecasts the added value of the steady-state prior outweighs the improvements obtained from stochastic volatility. However, given a model, stochastic volatility appears to be useful, as it improves the joint forecasting performance for the quarterly variables across the board when comparing the constant-volatility models to their heteroskedastic counterparts. Within the two groups of models with constant and stochastic volatility, we see that the steady-state models forecast better than MinnIW and MinnCSV, respectively, throughout all horizons. The table therefore shows that steady-state information and flexible modeling of the volatility structure help to improve the quarterly forecasts.
For the monthly forecasts, the picture is largely the same. The three models with stochastic volatility outperform the constant volatility models at all horizons, and SSNGCSV produces the most accurate density forecasts for h = 2, 3, 4. At the remaining horizons, SSCSV takes the lead. Among the constant volatility models, the ranking is no longer uniform across horizons.
With respect to the joint log predictive scores, we can therefore conclude the following. First, there are gains from utilizing prior information on the steady states. Second, further improvements can be obtained by allowing for stochastic volatility. Third, with a handful of exceptions for the quarterly forecasts made by MinnIW and SSIW, the relative LPDS is negative throughout, indicating that the mixed-frequency models produce better density forecasts than the single-frequency benchmarks. These three points are in line with the previous literature and can be seen as a synthesis of the conclusions of Villani (2009), Clark (2011), Schorfheide and Song (2015), Carriero et al. (2016), and Louzis (2019).
4.3.2 Quarterly Univariate Forecasting Results
Tables 4–6 present the univariate LPDS and RMSE for the three quarterly variables: GDP, residential investment, and nonresidential investment.
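As a minimal sketch of how the relative metrics reported in these tables are formed (not the authors' code; `relative_scores` and its argument names are hypothetical), the relative LPDS is a difference of average scores and the relative RMSE a ratio of root mean squared errors, under the convention that lower scores are better:

```python
import numpy as np

def relative_scores(lpds_model, lpds_bench, err_model, err_bench):
    """Relative LPDS (difference) and relative RMSE (ratio).

    Under the tables' conventions, a negative relative LPDS and a
    relative RMSE below 1 both indicate that the model beats the
    benchmark.
    """
    rel_lpds = np.mean(lpds_model) - np.mean(lpds_bench)
    rmse = lambda e: np.sqrt(np.mean(np.square(e)))
    rel_rmse = rmse(err_model) / rmse(err_bench)
    return rel_lpds, rel_rmse
```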
Table 4: Univariate forecast evaluation for GDP.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.23^{*}  −0.09  0.11  0.18  0.14  0.18  0.14  0.11  0.08 
SSIW  −0.23^{*}  −0.13  0.05  0.06  0.00  0.01  −0.02  −0.05  −0.10^{*} 
SSNGIW  −0.23^{*}  −0.10  0.10  0.15  0.10  0.11  0.04  0.01  −0.04 
MinnCSV  −0.27  −0.24^{*}  0.01  0.15  0.16  0.20  0.19  0.16  0.09 
SSCSV  −0.27  −0.26^{*}  −0.02  0.10  0.08  0.12  0.10  0.06  0.00 
SSNGCSV  −0.27  −0.25^{*}  0.00  0.13  0.14  0.17  0.14  0.10  0.03 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.90^{**}  0.95^{*}  1.05  1.11  1.07  1.10  1.07  1.06  1.04 
SSIW  0.90^{**}  0.94^{*}  1.03  1.05  1.00  1.01  0.98  0.96^{*}  0.95^{*} 
SSNGIW  0.90^{**}  0.94^{*}  1.05  1.10  1.05  1.07  1.03  1.01  0.99 
MinnCSV  0.92  0.95  1.03  1.13  1.10  1.12  1.08  1.06  1.03 
SSCSV  0.92  0.94  1.01  1.08  1.02  1.04  1.01  0.97  0.95 
SSNGCSV  0.92  0.94  1.02  1.12  1.08  1.09  1.05  1.01  0.99 
Note: The forecast horizon h denotes quarters. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(4) including all 13 variables aggregated to the quarterly frequency, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Table 5: Univariate forecast evaluation for residential investment.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  0.07  0.00  0.22  0.16  0.14  0.12  0.09  0.14  0.18 
SSIW  −0.01  −0.08  0.09  0.02  −0.01  −0.03  −0.08  −0.04  −0.01 
SSNGIW  0.03  −0.03  0.12  0.00  −0.09  −0.18  −0.25^{*}  −0.22^{*}  −0.19^{*} 
MinnCSV  −0.10  −0.49^{*}  −0.38^{*}  −0.42^{*}  −0.44^{*}  −0.38^{*}  −0.36^{*}  −0.32^{*}  −0.28^{*} 
SSCSV  −0.17  −0.54^{*}  −0.46^{*}  −0.53^{*}  −0.56^{**}  −0.53^{*}  −0.53^{*}  −0.48^{*}  −0.46^{*} 
SSNGCSV  −0.17  −0.54^{*}  −0.43^{*}  −0.51^{*}  −0.55^{**}  −0.53^{*}  −0.56^{*}  −0.52^{*}  −0.51^{*} 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.92^{**}  0.96^{**}  1.03  1.02  1.01  1.03  1.03  1.06  1.06 
SSIW  0.90^{**}  0.92^{**}  0.99  0.97  0.97  0.98  0.99^{*}  1.01  1.01 
SSNGIW  0.92^{**}  0.94^{**}  1.00  0.98  0.96  0.97  0.96  0.98  0.98 
MinnCSV  0.88^{**}  0.90^{**}  0.96  0.94  0.94^{*}  0.96  0.95^{*}  0.98  1.00 
SSCSV  0.87^{**}  0.90^{**}  0.95  0.91^{*}  0.92^{*}  0.93  0.92^{*}  0.95  0.96 
SSNGCSV  0.88^{**}  0.90^{**}  0.95  0.93  0.92^{*}  0.94  0.92^{*}  0.95  0.96 
Note: The forecast horizon h denotes quarters. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(4) including all 13 variables aggregated to the quarterly frequency, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Table 6: Univariate forecast evaluation for nonresidential investment.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.09^{*}  −0.38^{**}  −0.16^{*}  0.01  0.13  0.24  0.28  0.21  0.16 
SSIW  −0.10^{*}  −0.42^{**}  −0.22^{*}  −0.09  −0.02  0.08  0.11  0.03  −0.03 
SSNGIW  −0.09^{*}  −0.41^{**}  −0.20^{*}  −0.05  0.05  0.16  0.20  0.11  0.05 
MinnCSV  −0.12  −0.45^{**}  −0.24  −0.06  0.06  0.21  0.32  0.31  0.26 
SSCSV  −0.11  −0.47^{**}  −0.32^{*}  −0.17  −0.06  0.06  0.16  0.13  0.09 
SSNGCSV  −0.12^{*}  −0.46^{**}  −0.29^{*}  −0.14  −0.02  0.13  0.25  0.22  0.18 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.97  0.83^{**}  0.93  0.98  1.02  1.09  1.12  1.11  1.08 
SSIW  0.95  0.82^{**}  0.92^{*}  0.95  0.98  1.03  1.04  1.00  0.98^{*} 
SSNGIW  0.97  0.83^{**}  0.91^{*}  0.95  0.99  1.05  1.08  1.06  1.03 
MinnCSV  0.93  0.83^{**}  0.93  1.01  1.07  1.15  1.23  1.22  1.19 
SSCSV  0.94  0.81^{**}  0.88^{*}  0.94  0.99  1.04  1.08  1.06  1.03 
SSNGCSV  0.93  0.81^{**}  0.90  0.97  1.03  1.10  1.17  1.16  1.13 
Note: The forecast horizon h denotes quarters. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(4) including all 13 variables aggregated to the quarterly frequency, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Starting with GDP, a somewhat different pattern emerges than what was seen for the joint LPDS. For both evaluation metrics, SSIW is generally the best forecaster beyond the short term; it is outperformed only by the CSV models at the first three horizons, and only in terms of density forecasts. Table 4 shows that the mixed-frequency models do better than the quarterly benchmark in the immediate short term, when either nowcasting the current quarter or forecasting the next. Beyond the one-quarter-ahead forecast, the quarterly model generally produces more accurate forecasts; a similar result is found by Schorfheide and Song (2015). Use of the steady-state prior results in more accurate forecasts at every horizon, but whether a hierarchical prior formulation and stochastic volatility provide improvements varies. The homoskedastic steady-state models outperform MinnIW at all horizons, and the steady-state models with stochastic volatility consistently forecast GDP growth more accurately than MinnCSV.
For residential investment, Table 5 presents forecasting results that more closely resemble the joint results. SSCSV and SSNGCSV dominate at all horizons, although the difference relative to MinnCSV is occasionally small, particularly for the point forecasts. Nevertheless, both steady-state models with stochastic volatility perform well, scoring better than all other models at every horizon with respect to both point and density forecasts.
Finally, Table 6 shows the forecast evaluation for nonresidential investment. The pattern displayed in Table 6 is a mix of the patterns in Tables 4 and 5. For the nowcast, MinnCSV provides better forecasts than the other models, whereas SSCSV generally does well and ranks first at horizons 1–5 with respect to the density forecasts. The utility of the steady-state prior is clear from Table 6: while MinnCSV and MinnIW start out well, their performance deteriorates more rapidly with h than that of the models employing information about the steady states. We again see that both SSCSV and SSNGCSV dominate MinnCSV for all h > 0.
4.3.3 Monthly Univariate Forecasting Results
Moving to the monthly variables, Table 7 presents the forecast evaluation for inflation. The results indicate that there is little to gain from using the mixed-frequency VAR for forecasting monthly inflation as compared to a monthly VAR. The relative RMSE is close to unity, and few of the Diebold–Mariano tests of equal predictive ability indicate any difference between the benchmark and the mixed-frequency models. Use of stochastic volatility improves the density forecasts somewhat, whereas the quality of the point forecasts deteriorates.
Table 7: Univariate forecast evaluation for inflation.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  0.07  −0.09  −0.02  −0.01  0.01  −0.02  −0.01  −0.01  −0.02 
SSIW  0.06  −0.10  0.00  0.00  0.03  −0.00  −0.00  0.00  −0.01 
SSNGIW  0.05  −0.10  −0.01  −0.01  0.04  −0.01  −0.04  −0.02  −0.03^{*} 
MinnCSV  −0.22  −0.42  −0.31  −0.33  −0.34  −0.30  −0.29  −0.32  −0.30 
SSCSV  −0.22  −0.43  −0.32  −0.31  −0.33  −0.32  −0.31  −0.36  −0.32 
SSNGCSV  −0.22  −0.43  −0.31  −0.32  −0.33  −0.29  −0.33  −0.36  −0.34 
Relative RMSE (model in first column/benchmark)  
MinnIW  1.02  0.98  0.99  0.99  1.00  1.00  1.00  1.00  1.00 
SSIW  1.02  0.98  1.00  1.00  1.01  1.01  1.00  1.00  1.00 
SSNGIW  1.02  0.98  0.99  0.99  1.00  1.00  1.00  1.00  1.00 
MinnCSV  1.05  0.99  0.97  0.96  0.97  0.97  0.97^{*}  0.98^{*}  0.99 
SSCSV  1.04  0.98  0.97  0.97  0.97^{*}  0.97  0.97^{*}  0.97^{*}  0.98 
SSNGCSV  1.04  0.99  0.97  0.96  0.97  0.97  0.97^{*}  0.98^{*}  0.98 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Next, the evaluation of the forecasts of the federal funds rate is displayed in Table 8. In contrast to the results for inflation, we here find large benefits from using the mixed-frequency models for forecasting the monthly federal funds rate. All three models with stochastic volatility do well with respect to both density and point forecasts, but the steady-state models have a small edge across most horizons. Contrasting these results with the results for inflation in Table 7, we now find larger improvements from using stochastic volatility. We interpret this result as an indication that the federal funds rate has been more volatile than the inflation rate relative to constant historical levels of volatility. In addition, the improved accuracy of the forecasts obtained from the mixed-frequency models highlights the importance of utilizing all real-time information that is available. As explained in Section 4.1, the mixed-frequency VAR automatically handles ragged edges of the data, whereas the single-frequency benchmark is estimated on the balanced data set. For some variables, e.g., the federal funds rate, this evidently makes a difference; Schorfheide and Song (2015) reached the same conclusion when forecasting quarterly growth rates of monthly variables. A likely explanation for why this matters more for the federal funds rate is its high persistence.
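The ragged-edge structure that the mixed-frequency VAR exploits can be illustrated with a toy monthly panel in which the quarterly series is observed only in the last month of each quarter; the series names and values here are hypothetical, and filling in the missing months is what the filtering and smoothing steps of the actual sampler perform.

```python
import numpy as np
import pandas as pd

# A common monthly index for both series.
idx = pd.period_range("2019-01", "2019-12", freq="M")

# A monthly series is fully observed...
monthly = pd.Series(np.arange(12, dtype=float), index=idx, name="unemployment")

# ...while the quarterly series is attached to the last month of each
# quarter; the intervening months are treated as missing values to be
# handled by the state-space machinery rather than discarded.
quarterly = pd.Series(np.nan, index=idx, name="gdp_growth")
quarterly.loc[np.asarray(idx.month) % 3 == 0] = [1.1, 0.7, 0.9, 1.3]

panel = pd.concat([monthly, quarterly], axis=1)
```

A single-frequency benchmark, by contrast, would either aggregate the monthly series to quarters or drop the incomplete months at the end of the sample.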
Table 8: Univariate forecast evaluation for the federal funds rate.
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.91^{**}  −0.50^{**}  −0.32^{*}  −0.24^{*}  −0.20  −0.18  −0.17  −0.17 
SSIW  −0.93^{**}  −0.53^{**}  −0.35^{**}  −0.27^{**}  −0.23^{*}  −0.21^{*}  −0.21^{*}  −0.21^{*} 
SSNGIW  −0.92^{**}  −0.52^{**}  −0.35^{**}  −0.27^{**}  −0.23^{**}  −0.21^{*}  −0.20^{*}  −0.20^{*} 
MinnCSV  −1.45^{**}  −1.04^{**}  −0.80^{**}  −0.63^{**}  −0.52^{**}  −0.44^{**}  −0.38^{**}  −0.34^{*} 
SSCSV  −1.47^{**}  −1.06^{**}  −0.82^{**}  −0.64^{**}  −0.53^{**}  −0.45^{**}  −0.39^{**}  −0.35^{**} 
SSNGCSV  −1.46^{**}  −1.05^{**}  −0.81^{**}  −0.64^{**}  −0.52^{**}  −0.44^{**}  −0.37^{**}  −0.34^{**} 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.58^{**}  0.75^{*}  0.85  0.90  0.93  0.94  0.94  0.94 
SSIW  0.56^{**}  0.72^{*}  0.81^{*}  0.86^{*}  0.89^{*}  0.90^{*}  0.90^{*}  0.90^{*} 
SSNGIW  0.57^{**}  0.72^{*}  0.81^{*}  0.86^{*}  0.89^{*}  0.90^{*}  0.91^{*}  0.90^{**} 
MinnCSV  0.53^{**}  0.68^{*}  0.76^{*}  0.81  0.84  0.86  0.87  0.87 
SSCSV  0.51^{**}  0.65^{*}  0.73^{*}  0.78^{*}  0.82^{*}  0.83^{*}  0.85^{*}  0.85 
SSNGCSV  0.52^{**}  0.67^{*}  0.75^{*}  0.80^{*}  0.83^{*}  0.84^{*}  0.85  0.86 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
The final series for which we evaluate univariate forecasts is the unemployment rate. The results are presented in Table 9. The table reveals that the mixed-frequency models are useful also for forecasting unemployment. The results mirror those for the federal funds rate, displaying the importance of the ragged edge used by the mixed-frequency models. SSIW appears to be the best forecaster in terms of point forecasts, whereas SSCSV provides more accurate density forecasts at all horizons. Thus, adding stochastic volatility does not improve point forecasts of the unemployment rate, but the density forecasts exhibit enhancements. We interpret these results as an indication that the stochastic volatility alternatives better characterize the evolution of the unemployment rate. The result is not surprising: Clark (2011), for example, found that the effects of stochastic volatility were more pronounced for density than for point forecasts.
Table 9: Univariate forecast evaluation for the unemployment rate.
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.43  −0.34  −0.25  −0.28  −0.34  −0.30  −0.32^{*}  −0.29 
SSIW  −0.46  −0.39  −0.31  −0.35  −0.41^{*}  −0.38  −0.40  −0.37 
SSNGIW  −0.43  −0.33  −0.21  −0.24  −0.29  −0.25  −0.26  −0.22 
MinnCSV  −0.48  −0.48  −0.51  −0.61  −0.74  −0.77  −0.82  −0.84 
SSCSV  −0.49  −0.50  −0.52  −0.63  −0.77  −0.80  −0.87  −0.89 
SSNGCSV  −0.47  −0.47  −0.49  −0.59  −0.72  −0.74  −0.80  −0.82 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.78^{**}  0.83^{*}  0.87^{*}  0.87  0.86  0.88  0.89  0.90 
SSIW  0.77^{**}  0.81^{*}  0.84  0.84  0.84  0.86  0.87  0.88 
SSNGIW  0.79^{**}  0.84^{*}  0.87^{*}  0.87  0.87  0.89  0.89  0.91 
MinnCSV  0.82^{**}  0.87^{*}  0.90  0.90  0.89  0.89  0.90  0.91 
SSCSV  0.81^{**}  0.86^{*}  0.89  0.89  0.88  0.89  0.89  0.90 
SSNGCSV  0.83^{*}  0.88^{*}  0.91  0.91  0.90  0.91  0.91  0.92 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
4.3.4 Comparison with a Model with EquationSpecific Volatilities
We next compare the results with a more complex model to see whether the CSV specification we have assumed is restrictive or serves as an efficient approximation. The extended model, labeled SSNGSV, replaces the common volatility factor with equation-specific volatilities and is characterized by
The results in Table 10 show that using CSV leads to both gains and losses in terms of predictive ability. The SSNGSV model improves the density forecasts for residential investment and inflation at the h = 0 horizon, while the gain for GDP forecasts is negligible and that for nonresidential investment even negative. However, for horizons h > 0, the model with CSV produces better density forecasts for residential investment. For the federal funds rate, use of a separate volatility factor improves the density forecasts substantially. The results for the point forecasts are generally in line with those for the density forecasts, although the Diebold–Mariano test no longer indicates a difference between the two models' predictive abilities with respect to the federal funds rate. When focusing on the point forecasts, the model with CSV is no longer inferior when forecasting residential investment at the h = 0 horizon; instead, the Diebold–Mariano test signals rejection in favor of its superiority.
Table 10: Forecast comparison of SSNGSV relative to SSNGCSV.
Variable  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (SSNGSV − SSNGCSV)  
GDP  −0.07  0.07  0.10  0.07  0.03  0.06  0.06  0.08  0.08 
Nonresidential investment  0.05  0.09  0.00  −0.00  −0.03  −0.05  −0.08  −0.08  −0.09 
Residential investment  −0.43^{**}  0.32^{*}  0.44^{*}  0.34^{*}  0.27  0.14  0.15  0.13  0.09 
Inflation  −0.15^{*}  −0.11  −0.17  −0.17  −0.08  −0.04  0.10  0.10  0.06 
Federal funds rate  −1.33^{**}  −1.14^{*}  −1.00  −0.83  −0.71  −0.65  −0.58  −0.53  
Unemployment  −0.01  0.04  0.13  0.20  0.26  0.32  0.36  0.39  
Relative RMSE (SSNGSV/SSNGCSV)  
GDP  0.95  0.99  1.05^{*}  1.06  1.05  1.07^{*}  1.09^{*}  1.11^{*}  1.12^{*} 
Nonresidential investment  1.00  1.00  0.98  0.99  0.98  0.98  0.97  0.97  0.97 
Residential investment  1.06^{*}  1.06^{**}  1.06^{*}  1.04^{*}  1.00  0.98  1.00  0.99  0.97 
Inflation  0.99  0.99  1.00  1.01  1.01  1.01  1.01^{*}  1.01^{*}  1.00 
Federal funds rate  0.86  0.87  0.88  0.90  0.92  0.94  0.96  0.99  
Unemployment  0.94^{*}  0.93  0.94  0.94  0.95  0.97  0.98  0.98 
Note: The forecast horizon h represents quarters for GDP, Nonresidential investment, and Residential investment, and months otherwise. Negative LPDS entries indicate that the SSNGSV model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
The results thus neither strongly support nor strongly oppose the use of a model with CSV. Table 8 already illustrates the gain from using time-varying volatility for forecasting the federal funds rate, and Table 10 shows how a volatility model that is unique to the federal funds rate equation improves these forecasts even further. Given that the CSV model sacrifices some flexibility for improved computational speed, it is natural that some variables, whose volatility patterns differ relatively more from the average pattern captured by the common volatility factor, benefit from an idiosyncratic volatility factor. However, with the exception of the federal funds rate, the differences in predictive ability displayed in Table 10 are either small or in favor of the model with CSV.
4.3.5 Forecasts Evaluated against the Second Vintage
To ensure that our results are not primarily driven by our choice of data to evaluate the forecasts against, Appendix D presents the same tables as in the main text, but with the evaluations carried out against the second available vintages. Qualitatively, the results remain the same. For the forecasts of GDP, the gains obtained by using mixed-frequency data are larger when the forecasts are evaluated against the second vintage. Occasional changes in the rankings of the models occur across variables, but for the most part the rankings remain unaltered, and the conclusions made so far are intact irrespective of the choice of evaluation vintage.
5 Conclusion
We present a VAR model that is a synthesis of recent important contributions. Our model incorporates three main features. First, the model allows for mixed-frequency data through a state-space formulation. We deal with the particular mixed-frequency case involving monthly and quarterly data and solve the frequency mismatch problem by postulating a monthly VAR with missing values, similar to the work by Schorfheide and Song (2015). Second, we include prior beliefs about the steady states, or unconditional means, of the variables in the model by means of the steady-state prior developed by Villani (2009). We also employ the hierarchical formulation of the prior proposed by Louzis (2019), whose advantage is that only the prior means of the steady-state parameters need to be specified, while the prior variances are, in turn, equipped with hyperpriors. Third, to allow for an error covariance matrix that varies over time, we include as the final component the CSV model presented by Carriero et al. (2016).
We estimate our model and competing alternatives using US data comprising 10 monthly and three quarterly variables. The results show that the forecasts are generally improved by adding the three components to the benchmark VAR model. Using mixed instead of single frequencies of the data generally does not produce worse forecasts, and usually improves them. Including prior information about the steady states generally outperforms the corresponding alternatives that lack this information. The hierarchical steady-state prior is appealing as it allows for shrinkage toward the prior means of the steady states, and is generally on par with or better than the standard steady-state prior. Finally, we find that CSV mostly improves the accuracy of the forecasts, as the models including heteroskedasticity generally outperform the models with constant volatility.
Funding source: Jan Wallanders och Tom Hedelius Stiftelse samt Tore Browaldhs Stiftelse
Award Identifier / Grant number: P20160293:1
A Posterior Moments
Regression and Covariance Parameters
The moments of the posterior distributions for the regression and covariance parameters are:
Latent Volatility
Let
The conditional posterior distribution of
B Details on Model with Extended Stochastic Volatility Specification
We now use an independent normal prior for the regression parameters with a prior variance structure given by
C Data Sources
The IDs of the series used and their sources are shown in Table 11.
Series  Source  ID 

Nonfarm payrolls  ALFRED  PAYEMS 
Hours  FRED/ALFRED  CEU0500000034 
Unemployment rate  ALFRED  UNRATE 
Federal funds rate  ALFRED  FEDFUNDS 
Bond spread  ALFRED  T10YFF 
Stock market index  FRED-MD  S&P500 
Personal consumption  ALFRED  PCE 
Industrial production  ALFRED  INDPRO 
Capacity utilization  ALFRED  TCU 
CPI inflation  ALFRED  CPIAUCSL 
Nonresidential inv.  ALFRED  PNFI 
Residential inv.  ALFRED  PRFI 
GDP growth  ALFRED  GDPC1 
D Forecast Evaluation Tables (Second Vintage)
The tables in the main text present the results of the forecast evaluation when evaluated with respect to the most recent vintage. The tables in this Appendix (Tables 12–16) present the same evaluations, but conducted with respect to the second available vintage. Because the federal funds rate is not revised, its second vintage is the same as the most recent vintage; the results for this variable are therefore identical to Table 8 and are not reproduced here.
Table 12: Univariate forecast evaluation for GDP, evaluated against the second vintage.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.34^{**}  −0.15  0.03  0.12  0.14  0.13  0.09  0.09  0.09 
SSIW  −0.31^{*}  −0.16^{*}  0.01  0.04  0.04  −0.00  −0.05^{*}  −0.05^{**}  −0.05 
SSNGIW  −0.33^{**}  −0.15  0.03  0.11  0.12  0.08  0.02  0.01  0.00 
MinnCSV  −0.28  −0.27^{**}  −0.06  0.10  0.17  0.19  0.17  0.17  0.16 
SSCSV  −0.24  −0.28^{**}  −0.05  0.08  0.14  0.14  0.10  0.10  0.10 
SSNGCSV  −0.26  −0.27^{**}  −0.05  0.10  0.17  0.19  0.15  0.14  0.11 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.83^{**}  0.92^{**}  1.03  1.09  1.11  1.10  1.07  1.07  1.06 
SSIW  0.84^{**}  0.92^{**}  1.03  1.06  1.04  1.00  0.97  0.96  0.96 
SSNGIW  0.83^{**}  0.92^{**}  1.03  1.08  1.10  1.06  1.03  1.02  1.01 
MinnCSV  0.85^{**}  0.90^{*}  0.98  1.09  1.14  1.12  1.08  1.08  1.06 
SSCSV  0.87^{**}  0.90^{**}  0.97  1.05  1.07  1.04  1.00  0.99  0.99 
SSNGCSV  0.85^{**}  0.90^{**}  0.97  1.08  1.12  1.10  1.05  1.04  1.02 
Note: The forecast horizon h denotes quarters. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(4) including all 13 variables aggregated to the quarterly frequency, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Table 13: Univariate forecast evaluation for residential investment, evaluated against the second vintage.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  0.01  −0.11^{*}  0.14  0.14  0.14  0.11  0.08  0.14  0.18 
SSIW  −0.05  −0.18^{*}  0.03  0.01  −0.01  −0.05  −0.09  −0.06  −0.01 
SSNGIW  −0.00  −0.12^{*}  0.07  0.01  −0.06  −0.16  −0.22^{*}  −0.20^{*}  −0.15 
MinnCSV  −0.18  −0.53^{*}  −0.38^{*}  −0.36^{*}  −0.37^{*}  −0.31^{*}  −0.28^{*}  −0.23  −0.17 
SSCSV  −0.24  −0.56^{*}  −0.44^{*}  −0.45^{*}  −0.46^{*}  −0.44^{*}  −0.43^{*}  −0.38^{*}  −0.33 
SSNGCSV  −0.23  −0.56^{*}  −0.42^{*}  −0.43^{*}  −0.46^{*}  −0.43^{*}  −0.45^{*}  −0.41^{*}  −0.37 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.93^{*}  0.96^{*}  1.03  1.02  1.02  1.03  1.04  1.06  1.06 
SSIW  0.90^{*}  0.92^{**}  0.99  0.98  0.97  0.98  0.98^{*}  1.00  1.01 
SSNGIW  0.92^{*}  0.95^{*}  1.01  0.98  0.97  0.97  0.96  0.98  0.98 
MinnCSV  0.89^{*}  0.91^{**}  0.96  0.94  0.95^{*}  0.96  0.96  1.00  1.01 
SSCSV  0.88^{**}  0.91^{**}  0.95  0.91^{*}  0.92^{*}  0.93  0.93^{*}  0.96  0.97 
SSNGCSV  0.88^{*}  0.91^{**}  0.96  0.93  0.93^{*}  0.94  0.93  0.97  0.97 
Note: The forecast horizon h denotes quarters. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(4) including all 13 variables aggregated to the quarterly frequency, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Table 14: Univariate forecast evaluation for nonresidential investment, evaluated against the second vintage.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.07  −0.52^{*}  −0.08  0.05  0.19  0.34  0.38  0.29  0.23 
SSIW  −0.11^{*}  −0.62^{*}  −0.21^{*}  −0.11  −0.03  0.09  0.12  0.02  −0.04 
SSNGIW  −0.09^{*}  −0.57^{*}  −0.15^{*}  −0.02  0.10  0.24  0.27  0.16  0.10 
MinnCSV  −0.16^{*}  −0.74^{*}  −0.37  −0.17  −0.04  0.18  0.33  0.29  0.25 
SSCSV  −0.19^{*}  −0.81^{*}  −0.49  −0.33  −0.22  −0.04  0.08  0.03  0.01 
SSNGCSV  −0.17^{*}  −0.78^{*}  −0.44  −0.27  −0.14  0.09  0.23  0.17  0.14 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.99  0.84^{*}  0.96  0.99  1.05  1.11  1.14  1.12  1.11 
SSIW  0.96  0.81^{*}  0.94^{*}  0.95  0.98  1.03  1.03  0.99  0.99 
SSNGIW  0.98  0.83^{*}  0.95^{*}  0.97  1.01  1.08  1.09  1.07  1.06 
MinnCSV  0.96  0.84^{*}  0.96  1.01  1.09  1.18  1.24  1.23  1.21 
SSCSV  0.94  0.81^{*}  0.92^{*}  0.95  1.01  1.06  1.10  1.07  1.07 
SSNGCSV  0.95  0.82^{*}  0.94  0.98  1.05  1.14  1.19  1.17  1.16 
Note: The forecast horizon h denotes quarters. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(4) including all 13 variables aggregated to the quarterly frequency, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Table 15: Univariate forecast evaluation for inflation, evaluated against the second vintage.
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  0.06  −0.07  −0.02  −0.01  0.01  −0.02  −0.02  −0.02  −0.03 
SSIW  0.06  −0.09  −0.00  −0.00  0.03  −0.00  −0.01  −0.00  −0.01 
SSNGIW  0.05  −0.08  −0.01  −0.01  0.03  −0.01  −0.04  −0.02^{*}  −0.04 
MinnCSV  −0.03  −0.32  −0.27  −0.31  −0.32  −0.31  −0.31  −0.36  −0.34 
SSCSV  −0.05  −0.34  −0.28  −0.29  −0.33  −0.33  −0.34  −0.40  −0.37 
SSNGCSV  −0.04  −0.34  −0.27  −0.30  −0.32  −0.30  −0.35  −0.40  −0.38 
Relative RMSE (model in first column/benchmark)  
MinnIW  1.02  0.98  0.99  0.99  1.00  1.00  1.00  1.00  1.00 
SSIW  1.01  0.98  0.99  1.00  1.01  1.01  1.00  1.00  1.00 
SSNGIW  1.02  0.98  0.99  0.99  1.00  1.00  1.00  1.00  1.00 
MinnCSV  1.04  0.99  0.98  0.97  0.97  0.97  0.98^{*}  0.98^{*}  0.98 
SSCSV  1.04  0.99  0.98  0.97  0.97^{*}  0.98  0.98^{*}  0.98^{*}  0.98 
SSNGCSV  1.04  0.99  0.98  0.97  0.97  0.97  0.98^{*}  0.98^{*}  0.98 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables, using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Table 16: Univariate forecast evaluation for the unemployment rate, evaluated against the second vintage.
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.57  −0.43  −0.28  −0.29  −0.35^{*}  −0.32  −0.32^{*}  −0.29 
SSIW  −0.60  −0.48  −0.35  −0.36  −0.43^{*}  −0.40  −0.41  −0.38 
SSNGIW  −0.56  −0.41  −0.25  −0.25  −0.30  −0.26  −0.26  −0.21 
MinnCSV  −0.54  −0.48  −0.46  −0.54  −0.68  −0.72  −0.78  −0.80 
SSCSV  −0.56  −0.50  −0.48  −0.56  −0.71  −0.75  −0.82  −0.84 
SSNGCSV  −0.53  −0.46  −0.44  −0.51  −0.65  −0.69  −0.76  −0.78 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.74^{**}  0.81^{**}  0.86^{*}  0.87  0.86  0.87  0.88  0.90 
SSIW  0.73^{**}  0.79^{*}  0.83^{*}  0.84  0.84  0.85  0.86  0.88 
SSNGIW  0.75^{**}  0.81^{**}  0.87^{*}  0.87  0.87  0.88  0.89  0.91 
MinnCSV  0.79^{**}  0.85^{*}  0.90^{*}  0.90  0.89  0.89  0.90  0.91 
SSCSV  0.78^{**}  0.84^{*}  0.89^{*}  0.89  0.88  0.88  0.89  0.90 
SSNGCSV  0.79^{**}  0.86^{*}  0.91  0.92  0.90  0.91  0.91  0.92 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
E Additional Results
This appendix presents forecast evaluation tables (Tables 17–23) for the variables not discussed in the main text.
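As a reminder of how the table entries are constructed (following the row labels "model in first column − benchmark" and "model in first column/benchmark"), the two relative measures can be written as:

```latex
% Relative measures reported in Tables 17--23, for model m (first column)
% against the benchmark b, at forecast horizon h:
\Delta\mathrm{LPDS}_h = \mathrm{LPDS}^{(m)}_h - \mathrm{LPDS}^{(b)}_h,
\qquad
\text{Relative RMSE}_h = \frac{\mathrm{RMSE}^{(m)}_h}{\mathrm{RMSE}^{(b)}_h}.
```

Negative values of the former and values below 1 of the latter thus both favor the mixed-frequency model.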
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.11  −0.12  −0.05  −0.10  −0.04^{*}  −0.11  −0.06  −0.06 
SSIW  −0.08  −0.11  −0.03  −0.10  −0.04^{*}  −0.11  −0.06  −0.06 
SSNGIW  −0.11  −0.12  −0.04  −0.12^{*}  −0.04  −0.11  −0.07  −0.05 
MinnCSV  0.08  −0.10  −0.01  −0.17  −0.12  −0.08  0.01  0.04 
SSCSV  0.13  −0.08  −0.01  −0.16  −0.14  −0.13  −0.04  0.01 
SSNGCSV  0.08  −0.09  −0.01  −0.19  −0.15  −0.12  −0.02  0.02 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.96^{*}  0.95^{*}  0.98  0.96^{*}  0.99  0.98  0.98  0.98 
SSIW  0.96^{*}  0.96^{*}  0.98  0.96^{*}  0.99^{*}  0.98  0.98  0.98 
SSNGIW  0.95^{*}  0.95^{*}  0.98  0.96^{*}  0.99^{*}  0.98  0.98  0.98 
MinnCSV  0.97  0.96  0.97  0.95^{*}  0.97  0.99  0.99  0.99 
SSCSV  0.98  0.97  0.98  0.95^{*}  0.98^{*}  0.98  0.98  0.99 
SSNGCSV  0.97  0.96  0.97  0.95^{*}  0.97  0.98  0.98  0.99 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.06  −0.04  −0.16^{*}  −0.16^{*}  −0.07^{*}  −0.06  −0.12  −0.04  −0.08 
SSIW  −0.05  −0.02  −0.15^{*}  −0.16^{*}  −0.08^{*}  −0.06  −0.12  −0.06  −0.10 
SSNGIW  −0.03  −0.03  −0.15^{*}  −0.15^{*}  −0.06^{*}  −0.06  −0.10  −0.06  −0.06 
MinnCSV  −0.36  −0.27  −0.36  −0.35^{*}  −0.32  −0.28  −0.30  −0.23  −0.23 
SSCSV  −0.37  −0.25  −0.35  −0.34^{*}  −0.33  −0.30  −0.33  −0.27  −0.27 
SSNGCSV  −0.38^{*}  −0.27  −0.36  −0.35^{*}  −0.34  −0.31  −0.33  −0.27  −0.26 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.92^{*}  0.98  0.95^{*}  0.95^{*}  0.98^{*}  0.99  0.97  0.99  0.99 
SSIW  0.91^{*}  0.98  0.95^{*}  0.95^{*}  0.98  0.99  0.97  0.99  0.98 
SSNGIW  0.92^{*}  0.98  0.96^{*}  0.95^{*}  0.98^{*}  0.99  0.97  0.99  0.99 
MinnCSV  0.91^{**}  0.97  0.94^{*}  0.94^{*}  0.98^{*}  0.98^{*}  0.98  1.01  1.01 
SSCSV  0.91^{**}  0.97  0.95^{*}  0.94^{*}  0.98  0.99^{*}  0.98  1.00  1.00 
SSNGCSV  0.91^{**}  0.97  0.94^{*}  0.94^{*}  0.98^{*}  0.98^{*}  0.98  1.00  1.00^{*} 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.13^{**}  −0.24^{**}  −0.21^{*}  −0.18^{*}  −0.18  −0.18  −0.15  −0.18 
SSIW  −0.12^{**}  −0.24^{**}  −0.22^{*}  −0.20^{*}  −0.20  −0.21  −0.19  −0.23 
SSNGIW  −0.14^{**}  −0.24^{**}  −0.21^{*}  −0.19^{*}  −0.18  −0.20  −0.15  −0.18 
MinnCSV  −0.39^{**}  −0.41^{**}  −0.40^{*}  −0.36^{*}  −0.31  −0.31  −0.27  −0.29 
SSCSV  −0.38^{**}  −0.41^{**}  −0.40^{*}  −0.35^{*}  −0.31  −0.31  −0.27  −0.30 
SSNGCSV  −0.39^{**}  −0.42^{**}  −0.40^{*}  −0.35^{*}  −0.31  −0.31  −0.26  −0.29 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.93^{*}  0.88^{*}  0.91^{*}  0.92  0.93  0.93  0.95  0.94 
SSIW  0.93^{*}  0.88^{*}  0.90^{*}  0.91^{*}  0.92  0.92  0.94  0.92 
SSNGIW  0.93^{*}  0.88^{*}  0.90^{*}  0.92^{*}  0.93  0.93  0.95  0.94 
MinnCSV  0.89^{**}  0.87^{*}  0.88^{*}  0.90^{*}  0.91  0.91  0.93  0.93 
SSCSV  0.90^{**}  0.87^{*}  0.88^{*}  0.90^{*}  0.91  0.91  0.93  0.92 
SSNGCSV  0.89^{**}  0.87^{*}  0.88^{*}  0.90^{*}  0.91  0.91  0.93  0.93 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.05^{*}  −0.03  −0.00  0.00  0.00  −0.00  0.00  0.01  0.00 
SSIW  −0.07^{*}  −0.04  −0.00  0.01  0.01  0.00  0.00  0.01  −0.00 
SSNGIW  −0.06^{*}  −0.04  −0.01  −0.00  0.00  0.00  0.00  0.00  −0.00 
MinnCSV  −0.23^{*}  −0.15^{*}  −0.04  0.03  0.08  0.12  0.15  0.18  0.19 
SSCSV  −0.23^{*}  −0.15^{*}  −0.05  0.04  0.09  0.13  0.16  0.19  0.20 
SSNGCSV  −0.22^{*}  −0.14^{*}  −0.04  0.03  0.09  0.13  0.17  0.19  0.21 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.98^{*}  0.98  1.01  1.01  1.00  1.01  1.01  1.01  1.00 
SSIW  0.96^{*}  0.97  1.01  1.01  1.01  1.01  1.00  1.00  0.99 
SSNGIW  0.98^{*}  0.98  1.00  1.00  1.00  1.01  1.00  1.00  0.99^{*} 
MinnCSV  1.01  0.97  1.03  0.99  1.00  1.00  1.00  1.02  1.01 
SSCSV  0.98  0.96  1.02  0.99  0.99  0.99  0.98  0.99  0.99 
SSNGCSV  1.01  0.97  1.02  0.99  1.00  0.99  1.00  1.01  1.00 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.10  −0.06^{*}  −0.03  −0.06  −0.07  −0.04  −0.05  −0.05 
SSIW  −0.09  −0.01  0.03  −0.01  −0.01  0.02  0.01  0.01 
SSNGIW  −0.10  −0.04^{*}  −0.01  −0.05  −0.06  −0.02  −0.03^{*}  −0.02 
MinnCSV  −0.20  −0.16  −0.11  −0.08  −0.07  −0.04  −0.05  −0.03 
SSCSV  −0.19  −0.15  −0.08  −0.06  −0.06  −0.02  −0.02  −0.00 
SSNGCSV  −0.21  −0.16  −0.11  −0.07  −0.08  −0.04  −0.03  −0.01 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.94  0.98  0.98  0.97  0.96  0.98  0.98  0.98 
SSIW  0.95  1.00  1.02  1.00  1.00  1.02  1.01  1.01 
SSNGIW  0.94  0.99  0.99  0.98  0.97  0.99  0.99  0.99 
MinnCSV  0.96  0.98  0.99  0.97  0.97  0.99  0.98  0.98 
SSCSV  0.97  0.99  1.01  0.99  0.99  1.00  1.00  0.99 
SSNGCSV  0.96  0.98  0.99  0.98  0.97  0.99  0.99  0.99 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Model  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  −0.82^{**}  −0.36^{**}  −0.18^{**}  −0.10  −0.08  −0.08  −0.08  −0.08 
SSIW  −0.83^{**}  −0.37^{**}  −0.19^{**}  −0.11^{*}  −0.08  −0.08  −0.06  −0.05 
SSNGIW  −0.82^{**}  −0.37^{**}  −0.19^{**}  −0.12^{*}  −0.09  −0.10  −0.09  −0.09 
MinnCSV  −1.06^{**}  −0.58^{**}  −0.33^{*}  −0.17  −0.05  0.01  0.06  0.08 
SSCSV  −1.05^{**}  −0.58^{**}  −0.33^{*}  −0.16  −0.04  0.03  0.08  0.10 
SSNGCSV  −1.05^{**}  −0.57^{*}  −0.33^{*}  −0.16  −0.04  0.02  0.07  0.09 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.65^{**}  0.84^{*}  0.94^{*}  0.98  0.99  0.99  0.98  0.98 
SSIW  0.64^{**}  0.83^{**}  0.92^{*}  0.96  0.97  0.98  0.99  0.99 
SSNGIW  0.65^{**}  0.83^{**}  0.92^{*}  0.96  0.97  0.97  0.97  0.97 
MinnCSV  0.63^{*}  0.82^{*}  0.92  1.00  1.04  1.06  1.08  1.09 
SSCSV  0.62^{**}  0.81^{*}  0.91  0.98  1.03  1.06  1.08  1.09 
SSNGCSV  0.63^{*}  0.82^{*}  0.92  0.99  1.04  1.06  1.08  1.09 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
Model  h = 0  h = 1  h = 2  h = 3  h = 4  h = 5  h = 6  h = 7  h = 8 

Relative LPDS (model in first column − benchmark)  
MinnIW  1.40  0.39  0.07  −0.11  −0.16  −0.23  −0.28  −0.30  −0.34 
SSIW  1.30  0.37  0.09  −0.10  −0.19  −0.27  −0.34  −0.38  −0.44 
SSNGIW  1.37  0.42  0.15  −0.03  −0.09  −0.17  −0.24  −0.26  −0.30 
MinnCSV  3.53  0.77  −0.08  −0.43  −0.62  −0.76  −0.88  −0.93  −1.00 
SSCSV  3.39  0.70  −0.12  −0.46  −0.68  −0.84  −0.98  −1.06  −1.14 
SSNGCSV  3.48  0.74  −0.12  −0.47  −0.68  −0.84  −0.98  −1.05  −1.13 
Relative RMSE (model in first column/benchmark)  
MinnIW  0.96^{**}  0.95^{*}  0.94^{*}  0.93^{*}  0.93^{*}  0.94  0.94  0.95  0.95 
SSIW  0.95^{**}  0.95^{*}  0.93^{*}  0.92^{*}  0.92^{*}  0.92  0.92  0.92  0.93 
SSNGIW  0.95^{**}  0.95^{*}  0.94^{*}  0.94^{*}  0.94^{*}  0.94  0.94  0.95  0.95 
MinnCSV  0.98^{*}  0.98  0.97  0.97  0.98  0.99^{*}  0.99^{*}  1.00  1.01 
SSCSV  0.97^{**}  0.97^{*}  0.96^{*}  0.96  0.97  0.97  0.97  0.98  0.98^{*} 
SSNGCSV  0.97^{*}  0.97^{*}  0.96^{*}  0.96  0.97  0.97  0.97  0.98  0.99^{*} 
Note: The forecast horizon h denotes months. Negative LPDS entries indicate that the mixed-frequency model is superior in terms of density forecasting, and values of the RMSE below 1 indicate better point forecasts. Bold entries show the minimum per column. The benchmark model is a VAR(12) including the 10 monthly variables using the steady-state prior with a constant error covariance matrix. Two stars (^{**}) indicate that the Diebold–Mariano test of equal predictive ability is significant at the 1 percent level, whereas a single star (^{*}) indicates significance at the 10 percent level. The test employs the modifications proposed by Harvey et al. (1997).
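The significance stars in the tables are based on the Diebold–Mariano test of equal predictive ability with the small-sample modification of Harvey et al. (1997). A minimal sketch of that test follows; the function name and the synthetic loss series in the usage example are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def dm_test_modified(loss_m, loss_b, h=1):
    """Diebold-Mariano test of equal predictive ability, with the
    small-sample modification of Harvey, Leybourne and Newbold (1997).

    loss_m, loss_b : per-period forecast losses (e.g. squared errors)
                     of the candidate model and the benchmark
    h              : forecast horizon in periods

    Returns the modified DM statistic and its two-sided p-value from a
    Student-t distribution with T - 1 degrees of freedom.
    """
    d = np.asarray(loss_m, dtype=float) - np.asarray(loss_b, dtype=float)
    T = d.size
    dbar = d.mean()
    dc = d - dbar
    # Long-run variance of dbar: autocovariances up to lag h - 1
    # (rectangular kernel), as in the original DM construction.
    gamma = [dc @ dc / T]
    for k in range(1, h):
        gamma.append(dc[k:] @ dc[:-k] / T)
    var_dbar = (gamma[0] + 2.0 * sum(gamma[1:])) / T
    dm = dbar / np.sqrt(var_dbar)
    # Harvey et al. (1997) correction factor and t reference distribution.
    corr = np.sqrt((T + 1 - 2 * h + h * (h - 1) / T) / T)
    stat = corr * dm
    pval = 2.0 * stats.t.sf(abs(stat), df=T - 1)
    return stat, pval
```

Usage with hypothetical loss series: a candidate whose losses are systematically below the benchmark's yields a negative statistic and a small p-value, mirroring the starred negative entries in the tables.

```python
rng = np.random.default_rng(0)
loss_m = np.ones(300)                          # candidate losses
loss_b = 2.0 + 0.1 * rng.normal(size=300)      # benchmark losses
stat, p = dm_test_modified(loss_m, loss_b, h=3)
```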
References
Adolfson, M., J. Lindé, and M. Villani. 2007. "Forecasting Performance of an Open Economy DSGE Model." Econometric Reviews 26 (2–4): 289–328, https://doi.org/10.1080/07474930701220543.
Ankargren, S., M. Bjellerup, and H. Shahnazarian. 2017. "The Importance of the Financial System for the Real Economy." Empirical Economics 53 (4): 1553–86.
Ankargren, S., and P. Jonéus. 2020. "Simulation Smoothing for Nowcasting with Large Mixed-Frequency VARs." Econometrics and Statistics, Advance Online Publication, https://doi.org/10.1016/j.ecosta.2020.05.007.
Baffigi, A., R. Golinelli, and G. Parigi. 2004. "Bridge Models to Forecast the Euro Area GDP." International Journal of Forecasting 20 (3): 447–60, https://doi.org/10.1016/S01692070(03)000670.
Bańbura, M., D. Giannone, and L. Reichlin. 2010. "Large Bayesian Vector Auto Regressions." Journal of Applied Econometrics 25 (1): 71–92, https://doi.org/10.1002/jae.1137.
Bańbura, M., D. Giannone, and L. Reichlin. 2011. "Nowcasting." In The Oxford Handbook of Economic Forecasting, edited by M. P. Clements, and D. F. Hendry, Oxford University Press, chapter 8.
Carriero, A., T. E. Clark, and M. Marcellino. 2015. "Real-Time Nowcasting with a Bayesian Mixed Frequency Model with Stochastic Volatility." Journal of the Royal Statistical Society. Series A: Statistics in Society 178 (4): 837–62, https://doi.org/10.1111/rssa.12092.
Carriero, A., T. E. Clark, and M. Marcellino. 2016. "Common Drifting Volatility in Large Bayesian VARs." Journal of Business & Economic Statistics 34 (3): 375–90, https://doi.org/10.1080/07350015.2015.1040116.
Carriero, A., T. E. Clark, and M. Marcellino. 2019. "Large Bayesian Vector Autoregressions with Stochastic Volatility and Non-Conjugate Priors." Journal of Econometrics 212 (1): 137–54, https://doi.org/10.1016/j.jeconom.2019.04.024.
Carter, C. K., and R. Kohn. 1994. "On Gibbs Sampling for State Space Models." Biometrika 81 (3): 541–53, https://doi.org/10.1093/biomet/81.3.541.
Cimadomo, J., and A. D'Agostino. 2016. "Combining Time Variation and Mixed Frequencies: An Analysis of Government Spending Multipliers in Italy." Journal of Applied Econometrics 31 (7): 1276–90, https://doi.org/10.1002/jae.2489.
Clark, T. E. 2011. "Real-Time Density Forecasts From Bayesian Vector Autoregressions with Stochastic Volatility." Journal of Business & Economic Statistics 29 (3): 327–41, https://doi.org/10.1198/jbes.2010.09248.
Clark, T. E., and F. Ravazzolo. 2015. "Macroeconomic Forecasting Performance Under Alternative Specifications of Time-Varying Volatility." Journal of Applied Econometrics 30 (4): 551–75, https://doi.org/10.1002/jae.2379.
Cogley, T., and T. J. Sargent. 2005. "Drifts and Volatilities: Monetary Policies and Outcomes in the Post WWII US." Review of Economic Dynamics 8 (2): 262–302, https://doi.org/10.1016/j.red.2004.10.009.
D'Agostino, A., L. Gambetti, and D. Giannone. 2013. "Macroeconomic Forecasting and Structural Change." Journal of Applied Econometrics 28 (1): 82–101, https://doi.org/10.1002/jae.1257.
Del Negro, M., and G. E. Primiceri. 2015. "Time Varying Structural Vector Autoregressions and Monetary Policy: A Corrigendum." The Review of Economic Studies 82 (4): 1342–5, https://doi.org/10.1093/restud/rdv024.
Del Negro, M., and F. Schorfheide. 2011. "Bayesian Macroeconometrics." In The Oxford Handbook of Bayesian Econometrics, edited by J. Geweke, G. Koop, and H. van Dijk, 293–389. Oxford: Oxford University Press.
Dieppe, A., R. Legrand, and B. van Roye. 2016. The BEAR Toolbox, Working Paper No. 1934, Frankfurt, Germany: European Central Bank.
Durbin, J., and S. J. Koopman. 2012. Time Series Analysis by State Space Methods, 2nd edn. Oxford, UK: Oxford University Press.
Eraker, B., C. W. Chiu, A. T. Foerster, T. B. Kim, and H. D. Seoane. 2015. "Bayesian Mixed Frequency VARs." Journal of Financial Econometrics 13 (3): 698–721, https://doi.org/10.1093/jjfinec/nbu027.
Foroni, C., and M. Marcellino. 2013. A Survey of Econometric Methods for Mixed-Frequency Data, Working Paper No. 6, Oslo, Norway: Norges Bank.
Gelfand, A. E., and A. F. M. Smith. 1990. "Sampling-Based Approaches to Calculating Marginal Densities." Journal of the American Statistical Association 85 (410): 398–409, https://doi.org/10.1080/01621459.1990.10476213.
Ghysels, E. 2016. "Macroeconomics and The Reality of Mixed Frequency Data." Journal of Econometrics 193 (2): 294–314, https://doi.org/10.1016/j.jeconom.2016.04.008.
Ghysels, E., A. Sinko, and R. Valkanov. 2007. "MIDAS Regressions: Further Results and New Directions." Econometric Reviews 26 (1): 53–90, https://doi.org/10.1080/07474930600972467.
Giannone, D., M. Lenza, and G. E. Primiceri. 2015. "Prior Selection for Vector Autoregressions." The Review of Economics and Statistics 97 (2): 436–51, https://doi.org/10.1162/REST_a_00483.
Giannone, D., M. Lenza, and G. E. Primiceri. 2019. "Priors for the Long Run." Journal of the American Statistical Association 114 (526): 565–80, https://doi.org/10.1080/01621459.2018.1483826.
Giannone, D., L. Reichlin, and D. Small. 2008. "Nowcasting: The Real-Time Informational Content of Macroeconomic Data." Journal of Monetary Economics 55 (4): 665–76, https://doi.org/10.1016/j.jmoneco.2008.05.010.
Götz, T. B., and K. Hauzenberger. 2018. Large Mixed-Frequency VARs with a Parsimonious Time-Varying Parameter Structure, Discussion Paper No. 40. Frankfurt, Germany: Deutsche Bundesbank.
Griffin, J. E., and P. J. Brown. 2010. "Inference with Normal-Gamma Prior Distributions in Regression Problems." Bayesian Analysis 5 (1): 171–88.
Harvey, A. C., and R. G. Pierse. 1984. "Estimating Missing Observations in Economic Time Series." Journal of the American Statistical Association 79 (385): 125–31, https://doi.org/10.1080/01621459.1984.10477074.
Harvey, D., S. Leybourne, and P. Newbold. 1997. "Testing the Equality of Prediction Mean Squared Errors." International Journal of Forecasting 13 (2): 281–91, https://doi.org/10.1016/S01692070(96)007194.
Huber, F., and M. Feldkircher. 2019. "Adaptive Shrinkage in Bayesian Vector Autoregressive Models." Journal of Business & Economic Statistics 37 (1): 27–39, https://doi.org/10.1080/07350015.2016.1256217.
Iversen, J., S. Laséen, H. Lundvall, and U. Söderström. 2016. Real-Time Forecasting for Monetary Policy Analysis: The Case of Sveriges Riksbank, Working Paper No. 318. Stockholm, Sweden: Sveriges Riksbank.
Kadiyala, K. R., and S. Karlsson. 1993. "Forecasting with Generalized Bayesian Vector Auto Regressions." Journal of Forecasting 12 (3–4): 365–78, https://doi.org/10.1002/for.3980120314.
Kadiyala, K. R., and S. Karlsson. 1997. "Numerical Methods for Estimation and Inference in Bayesian VAR-Models." Journal of Applied Econometrics 12 (2): 99–132, https://doi.org/10.1002/(SICI)10991255(199703)12:2%3C99::AIDJAE429%3E3.0.CO;2A.
Karlsson, S. 2013. "Forecasting with Bayesian Vector Autoregression." In Handbook of Economic Forecasting, Vol. 2, edited by G. Elliott, and A. Timmermann, 791–897. Elsevier B.V., chapter 15.
Kastner, G. 2016. "Dealing with Stochastic Volatility in Time Series Using the R Package stochvol." Journal of Statistical Software 69 (5): 1–30.
Kastner, G., and S. Frühwirth-Schnatter. 2014. "Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models." Computational Statistics and Data Analysis 76: 408–23, https://doi.org/10.1016/j.csda.2013.01.002.
Kim, S., N. Shephard, and S. Chib. 1998. "Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models." The Review of Economic Studies 65 (3): 361–93, https://doi.org/10.1111/1467937X.00050.
Koop, G. M. 2013. "Forecasting with Medium and Large Bayesian VARs." Journal of Applied Econometrics 28 (2): 177–203, https://doi.org/10.1002/jae.1270.
Kuzin, V., M. Marcellino, and C. Schumacher. 2011. "MIDAS vs. Mixed-Frequency VAR: Nowcasting GDP in the Euro Area." International Journal of Forecasting 27 (2): 529–42, https://doi.org/10.1016/j.ijforecast.2010.02.006.
Litterman, R. B. 1986. "A Statistical Approach to Economic Forecasting." Journal of Business & Economic Statistics 4 (1): 1–4, https://doi.org/10.1080/07350015.1986.10509485.
Louzis, D. P. 2019. "Steady-State Modeling and Macroeconomic Forecasting Quality." Journal of Applied Econometrics 34 (2): 285–314, https://doi.org/10.1002/jae.2657.
Mariano, R. S., and Y. Murasawa. 2003. "A New Coincident Index of Business Cycles Based on Monthly and Quarterly Series." Journal of Applied Econometrics 18 (4): 427–43, https://doi.org/10.1002/jae.695.
Mariano, R. S., and Y. Murasawa. 2010. "A Coincident Index, Common Factors, and Monthly Real GDP." Oxford Bulletin of Economics and Statistics 72 (1): 27–46, https://doi.org/10.1111/j.14680084.2009.00567.x.
McCausland, W. J., S. Miller, and D. Pelletier. 2011. "Simulation Smoothing for State-Space Models: A Computational Efficiency Analysis." Computational Statistics & Data Analysis 55 (1): 199–212, https://doi.org/10.1016/j.csda.2010.07.009.
McCracken, M. W., and S. Ng. 2016. "FRED-MD: A Monthly Database for Macroeconomic Research." Journal of Business & Economic Statistics 34 (4): 574–89, https://doi.org/10.1080/07350015.2015.1086655.
Omori, Y., S. Chib, N. Shephard, and J. Nakajima. 2007. "Stochastic Volatility with Leverage: Fast and Efficient Likelihood Inference." Journal of Econometrics 140 (2): 425–49, https://doi.org/10.1016/j.jeconom.2006.07.008.
Österholm, P. 2008. "Can Forecasting Performance be Improved by Considering the Steady State? An Application to Swedish Inflation and Interest Rate." Journal of Forecasting 27 (1): 41–51, https://doi.org/10.1002/for.1041.
Österholm, P. 2012. "The Limited Usefulness of Macroeconomic Bayesian VARs When Estimating the Probability of a US Recession." Journal of Macroeconomics 34 (1): 76–86, https://doi.org/10.1016/j.jmacro.2011.10.002.
Primiceri, G. E. 2005. "Time Varying Structural Vector Autoregressions and Monetary Policy." The Review of Economic Studies 72 (3): 821–52, https://doi.org/10.1111/j.1467937X.2005.00353.x.
Roberts, G. O., and J. S. Rosenthal. 2009. "Examples of Adaptive MCMC." Journal of Computational and Graphical Statistics 18 (2): 349–67, https://doi.org/10.1198/jcgs.2009.06134.
Rodriguez, A., and G. Puggioni. 2010. "Mixed Frequency Models: Bayesian Approaches to Estimation and Prediction." International Journal of Forecasting 26 (2): 293–311, https://doi.org/10.1016/j.ijforecast.2010.01.009.
Romer, C. D., and D. H. Romer. 2000. "Federal Reserve Information and The Behavior of Interest Rates." American Economic Review 90 (3): 429–57.
Sax, C., and D. Eddelbuettel. 2018. "Seasonal Adjustment by X-13ARIMA-SEATS in R." Journal of Statistical Software 87 (11): 1–17, https://doi.org/10.18637/jss.v062.i02.
Schorfheide, F., and D. Song. 2015. "Real-Time Forecasting with a Mixed-Frequency VAR." Journal of Business & Economic Statistics 33 (3): 366–80, https://doi.org/10.1080/07350015.2014.954707.
Stock, J. H., and M. W. Watson. 2001. "Vector Autoregressions." Journal of Economic Perspectives 15 (4): 101–15.
Villani, M. 2009. "Steady State Priors for Vector Autoregressions." Journal of Applied Econometrics 24 (4): 630–50, https://doi.org/10.1002/jae.1065.
© 2020 Sebastian Ankargren et al., published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.