I am using Bayesian estimation. Dynare produces the oo_ results structure, from which I can get all the smoothed series. I compute the standard deviations of my relevant endogenous variables from the smoothed series and compare them with the standard deviations reported as theoretical moments in the Dynare output. I see a major scale difference: the standard deviations based on the smoothed series come close to the data, not to the theoretical moments. Why is that?
How big are the differences exactly? The typical difference comes from the theoretical moments being computed under i.i.d. shocks. The smoothed series, in contrast, are based on estimated shocks that may be significantly correlated in your finite sample, giving rise to significant differences in moments. Highly correlated smoothed shocks are often seen as a sign of misspecification: without that correlation, the model is not able to fit the data.
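To illustrate the mechanism with an illustrative Python sketch (not Dynare code; rho, phi, and the sample size are made-up numbers): the same law of motion driven by serially correlated shocks produces a sample standard deviation that differs systematically from the theoretical moment computed under i.i.d. shocks.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma_e, n = 0.9, 1.0, 2000  # illustrative AR(1) parameters

# Theoretical std of x_t = rho*x_{t-1} + e_t under i.i.d. shocks
theoretical_std = sigma_e / np.sqrt(1 - rho**2)

# Serially correlated "shocks": an AR(1) in the innovations themselves,
# scaled so the marginal shock variance stays equal to sigma_e^2
phi = 0.8
u = rng.normal(0.0, sigma_e, n)
e = np.empty(n)
e[0] = u[0]
for t in range(1, n):
    e[t] = phi * e[t - 1] + np.sqrt(1 - phi**2) * u[t]

def simulate(shocks):
    """Propagate a shock series through x_t = rho*x_{t-1} + shock_t."""
    x = np.empty(len(shocks))
    x[0] = shocks[0]
    for t in range(1, len(shocks)):
        x[t] = rho * x[t - 1] + shocks[t]
    return x

x_iid = simulate(u)
x_corr = simulate(e)

print(f"theoretical std (i.i.d. shocks): {theoretical_std:.2f}")
print(f"sample std, i.i.d. shocks:       {x_iid.std(ddof=1):.2f}")
print(f"sample std, correlated shocks:   {x_corr.std(ddof=1):.2f}")
```

With positively correlated shocks, the sample standard deviation comes out well above the theoretical one, which is the kind of gap you observe between smoothed-series moments and theoretical moments.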
The difference is quite substantial. What puzzles me is that if I use a variable as an observable, I expect its theoretical standard deviation to be very close to that of the observable. But that does not happen. For example, I use HP-filtered GDP as an observable with a standard deviation of 0.01, but the theoretical moment of this variable is 0.69.
If I understand you correctly, the difference between theoretical moments and moments based on smoothed series can be minimised if the smoothed shocks are i.i.d., i.e., if they pass a Q-test. That is a tall order, isn't it? One would then need to try all combinations of observables until all smoothed shocks become white noise. Am I interpreting you correctly?
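For what it's worth, the Q-test you mention (Ljung-Box) is easy to run once you have exported the smoothed shocks from oo_. A minimal numpy sketch (the ljung_box_q helper and the two simulated series are illustrative, not part of Dynare):

```python
import numpy as np

def ljung_box_q(x, lags=10):
    """Ljung-Box Q statistic for serial correlation (white-noise check).

    Compare the result to a chi-squared distribution with `lags`
    degrees of freedom; a large Q rejects the white-noise null.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x**2)
    q = 0.0
    for k in range(1, lags + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom  # lag-k autocorrelation
        q += r_k**2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(1)
white = rng.normal(size=300)          # stand-in for well-behaved shocks
persistent = np.empty(300)            # stand-in for correlated smoothed shocks
persistent[0] = white[0]
for t in range(1, 300):
    persistent[t] = 0.7 * persistent[t - 1] + white[t]

print(ljung_box_q(white))       # modest value, consistent with white noise
print(ljung_box_q(persistent))  # large value, serial correlation detected
```

You would apply the same check to each column of smoothed shocks rather than to simulated series.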
What you report is a bit strange. A factor of 70 should not happen.
Small correlation within a sample is quite normal, but large correlations are an issue. After all, you assume the shocks are i.i.d. during estimation.
Side note: you are not supposed to use a two-sided HP filter for Bayesian estimation. Please take a look at Pfeifer (2013): "A Guide to Specifying Observation Equations for the Estimation of DSGE Models".
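To see why the two-sided filter is problematic: the HP trend at date t depends on future observations, which is inconsistent with the one-sided Kalman filtering underlying likelihood-based estimation. A minimal numpy sketch of the standard HP filter (the hp_trend helper and the random-walk series are illustrative):

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Two-sided HP filter: trend solves (I + lam * D'D) tau = y,
    where D is the second-difference operator."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=80))  # illustrative random-walk "GDP" series

cycle = y - hp_trend(y)

# Two-sidedness: perturbing only the LAST observation changes the
# filtered cycle at an earlier date, i.e. the filter uses information
# that was not available in real time.
y2 = y.copy()
y2[-1] += 5.0
cycle2 = y2 - hp_trend(y2)
print(abs(cycle2[40] - cycle[40]))  # nonzero: date 40 "sees" date 79
```

This is why the guide recommends one-sided alternatives (or specifying the observation equation differently) for estimation.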