Correlation among DSGE shocks

Hi Johannes,

How should users of Dynare's DSGE estimation/calibration procedures think about disciplining the shocks they use to identify the observable time series they introduce? For example, in your recent news shocks paper, a referee might have been reluctant to accept the introduction of all these shocks, on the grounds that an unknown correlation among them in the data might not be captured by the model, so the parameter estimates would be wrong. You imposed some structure on a few of the shock processes, but what about the others? Introducing measurement error to identify a shock is an alternative, but unless there is good prior evidence that a particular time series is poorly measured, it seems wrong to throw it in and simply attribute variation in the data to it.

A related question: in my paper, the model matches the historical time series to a large extent, but it is much more volatile than the data. I suspect this is because of the five shocks in my model; but then the DSGE estimation should deliver lower standard deviations for my shocks, right?

Thanks again!

A few thoughts first: you really have to clearly distinguish between calibration and estimation. Typical calibration is not affected by the problems you mention. Calibration fixes parameters to match stylized facts, usually long-run averages related to growth observations, and those long-run averages are essentially unaffected by the stochastic shocks. For example, people often fix the discount factor to match the long-run real interest rate and the depreciation rate to match the capital-output ratio.
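
For example, a minimal sketch of that kind of calibration at quarterly frequency (the numbers are purely illustrative, and beta and delta are assumed to be declared in the parameters block of the .mod file):

```
% Euler equation in steady state: 1 = beta*(1+r)  =>  beta = 1/(1+r)
beta  = 1/1.01;    % matches a quarterly real interest rate of about 1 percent
% Capital law of motion in steady state: I/Y = delta*(K/Y)
delta = 0.25/10;   % I/Y of 0.25 and a quarterly capital-output ratio of 10
```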

Now to your original question. What you seem to be asking is: why not estimate the full covariance matrix of the shocks instead of just the diagonal? The typical reason is Occam's razor. Often the shocks considered are so structurally distinct that we do not think they should be correlated; what does technology in the form of TFP have to do with preference shocks? Imposing a diagonal covariance matrix allows for a more parsimonious parameterization of the model (n instead of n(n+1)/2 parameters, and thus no curse of dimensionality). Of course, if you have reason to suspect correlation, you should allow for it, or at least test robustness to it. In our "Fiscal News and Macroeconomic Volatility" paper, we allowed the tax shocks to be correlated.
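
In Dynare, such a correlation can be imposed directly in the shocks block via a corr statement. A minimal sketch, with illustrative shock names and values:

```
shocks;
var e_tau_l; stderr 0.01;       % labor tax shock
var e_tau_k; stderr 0.01;       % capital tax shock
corr e_tau_l, e_tau_k = 0.3;    % impose a correlation between the tax shocks
end;
```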

Regarding your problem of the estimated model being overly volatile: Section 3.3 of our paper says:
"To avoid the common problem of the estimated model overpredicting the model variances, we follow Christiano et al. (2011) and use endogenous priors"
The underlying problem is that likelihood-based estimation does not target the moments you look at; it effectively minimizes the one-step-ahead forecast errors, so nothing forces the model-implied variances to match those in the data.
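
In Dynare, these endogenous priors are activated with the endogenous_prior option of the estimation command; they discipline the estimation using the second moments of the observables so that the model does not overpredict their variances. A sketch, where the datafile name and the remaining options are illustrative:

```
estimation(datafile=mydata, endogenous_prior,
           mode_compute=4, mh_replic=20000, mh_nblocks=2);
```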

Thanks for the Christiano et al. reference and the passage from your paper; I am now also reading a paper by Del Negro from the Fed on this point.

A follow-up on the correlation of errors: I think it's tough to argue that those structural shocks are all mutually independent. For example, a financial shock affects investment and asset prices, as well as the government's overall budget constraint and its incentive to reoptimize toward time-inconsistent behavior. Nonetheless, as you suggested, testing for correlation among them is a good check. Do you suggest introducing measurement error instead of a structural shock, especially if there is no strong evidence that a particular variable is mismeasured?
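
For concreteness, my understanding is that in Dynare this would amount to estimating a standard deviation for the observed variable itself, something like the following (the variable name and prior are just illustrative):

```
varobs y_obs;
estimated_params;
stderr y_obs, inv_gamma_pdf, 0.005, inf;   % measurement error on observed output
end;
```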

EDIT: Is it okay to provide time series for some observables for, say, 1950-2010, but for another one only over a later sample, say 1990-2010? Does Dynare treat missing observations as zero?

Be careful. It seems you are confusing exogenous shocks with the endogenous responses to those shocks. If your financial shock affects the tax base, this should be reflected in the fiscal feedback rules, not in a separate "exogenous" shock (which would then clearly be endogenous). What I am suggesting is to check robustness by re-estimating the model with correlated shocks for the ones where you suspect correlation, as in the sketch below.
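
One way to run that robustness check is to estimate the correlation coefficient directly, e.g. with a flat prior over (-1, 1); shock names and priors here are illustrative:

```
estimated_params;
stderr e_fin, inv_gamma_pdf, 0.01, inf;      % financial shock
stderr e_tau, inv_gamma_pdf, 0.01, inf;      % tax shock
corr e_fin, e_tau, uniform_pdf, , , -1, 1;   % estimated correlation
end;
```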

You can do so, but the very different sample lengths might be a problem. And no, missing observations are not treated as zero, but as missing: the Kalman filter treats them as unobserved and infers their values from the data that are available.
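
In practice, you would fill the pre-1990 entries of the shorter series with NaN in the datafile; the Kalman filter then skips those entries. A sketch with illustrative names:

```
varobs y_obs c_obs;   % c_obs has NaN entries before 1990Q1 in the datafile
estimation(datafile=mydata, mode_compute=4, mh_replic=20000, mh_nblocks=2);
```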