Backing out shocks

Hi all,
I am wondering how exactly the Bayesian procedure in Dynare produces smoothed shocks. An alternative procedure is to feed the observed state variables into the model and back out the shocks. This procedure is followed by Kevin Lansing in several of his papers and also by Nolan and Thoenissen in their JME paper. The advantage of this alternative approach is that the model replicates the data perfectly by construction. In Dynare's Bayesian algorithm, by contrast, only the observables used in estimation are matched perfectly. Other endogenous variables are also simulated, but they seldom match the actual data well. For example, if you have four shocks, you can run the Bayesian routine with at most four observables. Dynare will match these four observables perfectly, but beyond these the predictions are usually quite poor. The alternative procedure of Lansing can match more variables perfectly.

My questions are:

  1. Are these two procedures equivalent?
  2. Is there any way I can use Dynare to back out shocks without going through the Bayesian procedure, i.e., to implement Lansing's approach? It looks like Lansing's procedure works smoothly if the model is simple enough to back out the shocks analytically.

Thanks in advance for any feedback.
PB

I am not sure I understand the question. Dynare uses the Kalman smoother. In the class of linear Gaussian models, that is the best you can do (full information approach). See also Kalman filter - Wikipedia
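To be concrete, in a mod file the smoothed shocks come out of the estimation command with the smoother option. A minimal sketch (the datafile name mydata and the observable y are placeholders; the estimated_params block and priors are omitted):

    varobs y;
    estimation(datafile=mydata, smoother) y;
    // the smoothed shock series are then stored in oo_.SmoothedShocks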

The approach you have in mind uses a least-squares criterion that is different from the full-information criterion. So obviously the two approaches are not equivalent.

It is also not true that the alternative procedure of Lansing can match more variables perfectly.

The general rule is that N shocks can only perfectly explain N variables, unless some additional variables are perfectly related to the N explained variables (e.g. are a linear combination of them).

When you read the Benk et al. paper, you will see that they try to match 5 variables with 3 shocks, i.e. the system is overidentified. They then use OLS to find the shock series that minimizes the unexplained variation in the series targeted. That is, in general none of the observed series will be perfectly matched. Only if you had 5 shocks could you hope to perfectly explain the data.
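For reference, the least-squares step can be written down explicitly (generic notation, not taken from the paper): stacking the five targeted series in $y_t$ and the three shocks in $\varepsilon_t$, with the model-implied loading matrix $B$ (so $y_t \approx B \varepsilon_t$), the OLS shock estimate is

$$\hat{\varepsilon}_t = (B' B)^{-1} B' y_t,$$

and the residual $y_t - B \hat{\varepsilon}_t$ is the unexplained variation; it vanishes only when there are as many shocks as targeted series.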

Now, you could probably easily implement this formula using the state space matrices returned by Dynare. Alternatively, you can run the Kalman smoother on your calibrated model using the calib_smoother command.
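A rough MATLAB sketch of the first route, run after stoch_simul so that oo_.dr is populated. The fields ghx, ghu, ys, and inv_order_var are Dynare's first-order decision rule objects; obs_idx (declaration-order indices of the observed variables), data, xhat, and T are placeholders you would have to supply:

    % Back out shocks period by period via least squares, assuming the
    % decision rule y_t = ys + ghx*xhat_{t-1} + ghu*u_t.
    dr   = oo_.dr;
    rows = dr.inv_order_var(obs_idx);   % map declaration order to DR order
    Gu   = dr.ghu(rows, :);             % shock loadings of the observables
    Gx   = dr.ghx(rows, :);             % state loadings of the observables
    ybar = dr.ys(obs_idx);              % steady state of the observables
    u_hat = zeros(size(Gu, 2), T);
    for t = 1:T
        % data(:,t) are the observed levels; xhat(:,t) are the observed state
        % deviations from steady state, ordered to match the columns of ghx.
        % Backslash on an overdetermined system returns the OLS solution.
        u_hat(:, t) = Gu \ (data(:, t) - ybar - Gx * xhat(:, t));
    end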

Thank you for this very insightful response. It seems that I can always match a subset of variables I am interested in by adding a sufficient number of shocks and cleverly choosing the observables I would like to predict. The model then looks data-congruent; any model can be made to fit the data well. I could then perfectly match the equity premium and risk premia, which models usually have difficulty predicting.
It sounds like a free lunch to me. How do you then do rigorous model validation? I must be missing something here.

It’s not a free lunch. You need overidentifying restrictions. It’s like a regression: if you use as many linearly independent regressors as data points, you get a perfect model fit. What you usually would do is check the out-of-sample forecast performance. For DSGE models, you can also check whether second moments that were not explicitly targeted make sense.
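To see the regression analogy in numbers (a toy MATLAB illustration, nothing model-specific):

    % With as many linearly independent regressors as data points,
    % the in-sample fit is perfect by construction.
    rng(1);                    % reproducibility
    X = randn(4, 4);           % 4 observations, 4 regressors
    y = randn(4, 1);
    beta = X \ y;              % square system: solved exactly
    disp(norm(y - X*beta))     % ~0: perfect fit, zero degrees of freedom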

Yes, thanks again for enlightening me on this. I also came to know about the calib_smoother option from your reply. I looked at the reference manual and roughly understand how it works. Would it be possible to send me an example programme where I can see exactly how it is coded? I could not find an example in the manual where calib_smoother is used. It looks like a very useful option. Thanks again for your help.
PB

See
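For what it's worth, a minimal sketch of how calib_smoother is typically invoked in a calibrated mod file (mydata and the observable y are placeholders):

    varobs y;
    calib_smoother(datafile=mydata) y;
    // smoothed shocks at the calibrated parameters are stored in oo_.SmoothedShocks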

Many thanks again. This is super useful.