I’ve asked a question about the comparison of moments between simulated and empirical data in the post Question on bayesian estimation . You said that sometimes I have to live with a big difference between the two types of moments. If so, are there any other ways to verify the results of Bayesian estimation? How can I convince others that my results are acceptable?

That is a tough question. By construction, a model estimated with Bayesian techniques will perfectly fit your data. So you need some kind of “overidentifying restrictions” that allow you to evaluate your model’s fit outside of the actual data used. Due to the issue of small-sample correlations, it is not clear that a difference between actual and simulated moments amounts to failing such a test.
Most of the time, people simply don’t provide additional tests.

I’m sorry, but I do not quite understand these “overidentifying restrictions”. What are these restrictions?

And also, when you say “people simply don’t provide additional tests”, do you mean that people just display the estimation results without doing any tests? In that case, how can one evaluate the correctness of estimation results, given that different data series will lead to different results? And what should I say if someone doubts the reliability of my estimation results?

Overidentifying restrictions are restrictions that the data puts on your model and that the model can violate. If the model does not satisfy these restrictions, you can reject the model. For example, in GMM, if you have more moments than parameters, your model generally will not be able to match all moments. If the difference is too big, you can reject the model. That will not work with full-information techniques, because the model always fully explains the data.
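To make the GMM logic concrete, here is a minimal, self-contained sketch in Python (a hypothetical toy example, not Dynare code and not part of this discussion): three moment conditions identify the two parameters of a normal distribution, leaving one overidentifying restriction that Hansen’s J statistic can test.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)  # toy "data"

def moment_conditions(theta, x):
    # 3 moment conditions for 2 parameters (mu, sigma):
    # one overidentifying restriction remains
    mu, sigma = theta
    g1 = x - mu
    g2 = x**2 - (mu**2 + sigma**2)
    g3 = x**3 - (mu**3 + 3 * mu * sigma**2)
    return np.column_stack([g1, g2, g3])

def gmm_objective(theta, x, W):
    gbar = moment_conditions(theta, x).mean(axis=0)
    return gbar @ W @ gbar

# step 1: identity weighting matrix
theta0 = np.array([0.0, 1.0])
res1 = minimize(gmm_objective, theta0, args=(x, np.eye(3)), method="Nelder-Mead")

# step 2: efficient weighting matrix from step-1 moment residuals
g = moment_conditions(res1.x, x)
W = np.linalg.inv(g.T @ g / len(x))
res2 = minimize(gmm_objective, res1.x, args=(x, W), method="Nelder-Mead")

# Hansen's J test: n * minimized objective ~ chi2(#moments - #params) under the null
J = len(x) * gmm_objective(res2.x, x, W)
pval = chi2.sf(J, df=3 - 2)
print(f"theta_hat = {res2.x}, J = {J:.3f}, p-value = {pval:.3f}")
```

If the J statistic is large relative to the chi-squared critical value (small p-value), the overidentifying restriction, and hence the model, is rejected; a full-information likelihood-based estimation offers no analogous built-in test.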

What do you mean by “evaluate the correctness of estimation results”? If the model is properly estimated, it will correctly match the data. How would you judge correctness (beyond the obvious technical diagnostics about estimation like mode_check etc.)?

I’m sorry if my problem was not well described. I also have some new questions, so I will try to put it more specifically this time. Would you please give some advice? Thank you!

I use 7 observed data series to do Bayesian estimation in a model with 7 shocks. The code runs well and I get estimation results. But when comparing the moments as usual, the simulated moments are much larger than the moments of the empirical data. According to my understanding of your reply in the last post, this happens because the correlation of the data series forces correlated shocks in the model, while the simulations are based on uncorrelated shocks. Did I understand that correctly? Is this more likely to happen when the observed data series are short?

To solve this problem, I tried dropping one shock at a time (leaving 6 shocks and 6 data series) and checking the estimation results. I found that the simulated moments are close to the empirical moments when the TFP shock is dropped. Is this normal?

In any case, I don’t want to lose the TFP shock, because it is the most basic one and very important in the model. Moreover, it seems that a TFP shock is always included in a DSGE model. So I wonder whether there is some other way to check the estimation results besides the moments comparison? Or do I just have to live with it?

If I have to live with it, is there any method to evaluate the estimation results, given that different data sets can lead to different estimation results? How do I decide which result is best?

Yes, that is to be expected when the data series are short. If they are long and you still see such correlations, that is a clear sign of misspecification, as opposed to the correlations simply being due to chance in a short sample.
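The short-sample point can be illustrated with a small Monte Carlo sketch (a hypothetical illustration in Python, with made-up parameter values, not from the discussion above): two genuinely independent but persistent AR(1) series routinely show sizable sample correlations at typical macro sample lengths, while the sample correlation vanishes as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(1)
T_SHORT, T_LONG = 80, 20000  # 80 obs is roughly 20 years of quarterly data

def abs_corrs(T, reps, rng, rho=0.9):
    # draw pairs of INDEPENDENT AR(1) series and record their sample correlation
    out = np.empty(reps)
    for r in range(reps):
        e = rng.normal(size=(T, 2))
        x = np.zeros((T, 2))
        for t in range(1, T):
            x[t] = rho * x[t - 1] + e[t]
        out[r] = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    return np.abs(out)

corr_short = abs_corrs(T_SHORT, 2000, rng)
corr_long = abs_corrs(T_LONG, 50, rng)  # fewer replications, long series are costly

print(f"T={T_SHORT}: median |corr| = {np.median(corr_short):.2f}, "
      f"95th percentile = {np.percentile(corr_short, 95):.2f}")
print(f"T={T_LONG}: median |corr| = {np.median(corr_long):.2f}")
```

With persistence of 0.9, the short-sample correlations are typically far from zero even though the two series are independent by construction, which is why a short-sample moments comparison cannot cleanly separate chance correlation from misspecification.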

TFP shocks are essential in most models because they introduce the right comovement in the data. You often find that the TFP shock is “used” as the base shock, and the other shocks then systematically explain the deviations of particular variables from that typical comovement.

Again, a moments comparison is not very informative in short samples, as it cannot distinguish between short-sample issues and systematic problems in the model.