Model validation of Bayesian estimation

Hi,

I am a PhD student and I have estimated a DSGE model. As part of the model validation, I have included a comparison of actual and theoretical moments. However, my supervisor suggests that this kind of validation is suitable only for calibrated models, whereas in a recent DSGE training course we did a similar exercise for Bayesian-estimated models. Can anyone help me with this? Am I doing something wrong? Is there a better method to validate Bayesian-estimated models?

Thanks in advance.

This is a rather philosophical matter. Historically, models were calibrated to long-run growth facts and then cross-validated by looking at their implied short- to medium-run business-cycle implications, which is in a sense a different dataset.

When estimating a model, the parameters are chosen by looking at the same dataset whose second moments you then try to match. You could argue that this is not a genuine “out of sample” test.

People nevertheless do this, because estimation effectively tries to minimize the forecast error. It is therefore not guaranteed that selected second moments are well matched. Checking whether the model matches them is a sensible test (not meant to denote a statistical test).
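For concreteness, a minimal sketch of such an eyeball comparison in Python (all numbers are placeholders; in practice the model-implied covariance matrix would come from your solved model, e.g. Dynare's oo_.var):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the observed data (T x n); in practice, load your dataset.
data = rng.standard_normal((200, 2))

# Empirical second moments
emp_std = data.std(axis=0, ddof=1)
emp_corr = np.corrcoef(data, rowvar=False)

# Model-implied moments: in practice, read these off the model's
# unconditional variance-covariance matrix; illustrative numbers here.
model_cov = np.array([[1.10, 0.30],
                      [0.30, 0.90]])
model_std = np.sqrt(np.diag(model_cov))
model_corr = model_cov / np.outer(model_std, model_std)

print("std  (data):", np.round(emp_std, 2), "(model):", np.round(model_std, 2))
print("corr (data):", round(emp_corr[0, 1], 2), "(model):", round(model_corr[0, 1], 2))
```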

What you could do is perform a test of the overidentifying restrictions; see e.g. ideas.repec.org/a/eee/dyncon/v31y2007i8p2599-2636.html. This test will be a lot stricter than the eyeball econometrics performed on second moments. See also delong.typepad.com/sdj/2011/10/calibration-and-econometric-non-practice.html
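To give an idea of what such a test looks like, here is a generic GMM-style J-test sketch in Python; all quantities are illustrative stand-ins, and this is not the exact procedure of the linked paper:

```python
import numpy as np
from scipy import stats

# Illustrative quantities only; in practice these come from your data and model.
m_hat   = np.array([1.62, 0.85, 0.31])  # empirical moments
m_model = np.array([1.55, 0.90, 0.25])  # model-implied moments at the estimates
V       = np.diag([2.0, 1.0, 0.5])      # asy. covariance of sqrt(T)*(m_hat - m0)
T        = 200                          # sample size
n_params = 1                            # parameters fitted to these moments

# J-statistic: under the null that the model matches the moments, J is
# asymptotically chi-squared with (#moments - #parameters) degrees of freedom.
g = m_hat - m_model
J = T * g @ np.linalg.solve(V, g)
dof = len(g) - n_params
p_value = stats.chi2.sf(J, dof)
print(f"J = {J:.2f}, dof = {dof}, p-value = {p_value:.3f}")
```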

If you do Bayesian estimation, you should not be testing at all. Rather, you do model comparison and only reject your current model if you have found a better one (the idea being that a poor model is still better than no model at all).
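Computationally, such a model comparison boils down to turning log marginal data densities into posterior model probabilities. A minimal sketch, assuming the log marginal densities are already available from the estimation output (values here are purely illustrative):

```python
import numpy as np

# Log marginal data densities of the competing models (illustrative values).
log_mdd = np.array([-431.3, -451.6])
log_prior = np.log(np.array([0.5, 0.5]))  # equal prior model probabilities

# Subtract the maximum before exponentiating to avoid numerical underflow.
log_post = log_prior + log_mdd
log_post -= log_post.max()
post_prob = np.exp(log_post) / np.exp(log_post).sum()

# Bayes factors relative to the best model
bayes_ratio = np.exp(log_mdd - log_mdd.max())

print("Bayes ratio:", bayes_ratio)
print("Posterior model probability:", post_prob)
```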

Dear Jpfeifer,

I am also curious about this issue. I have found that in many papers, people compare theoretical moments with empirical moments, such as variances and correlations, to validate that their model is good. But if my understanding is correct, your statement is that we can only check the identification and do model comparison rather than this kind of validation. Could you explain in a little more detail? Thanks!

My point was a rather philosophical one. Bayesian econometrics does not do classical testing. In Bayesian papers you will regularly find comparisons between models, but never a statistical test of whether the model “fits” the data. In contrast, classical frequentist econometricians do test models. See the linked paper for an example.
Note also that I did not talk about “identification” in the sense of whether parameters are identified, but about classical tests of overidentifying restrictions. This is something different!

Comparing moments in the data and in the model is still a valid check (it amounts to an eyeball test of the overidentifying restrictions of your model), but it is less strict than a proper calibration exercise, where the validation takes place on an entirely different domain.

Good evening,

Please, Mr. jpfeifer, I have two questions and I hope you can answer me:

  • Is it normal for the log marginal density of the DSGE-VAR to be lower than that of the DSGE model? (Usually it is higher.)
  • In MATLAB, when I compare the DSGE and the DSGE-VAR model, I find:

Log data density is -424.477452.

and Log Marginal Density -431.269361.

Which one should I select? Thank you very much for answering me.
Soulaima

For the data density, the higher the better. Thus, -424 is better than -431.

I am not aware of any result stating that the marginal data density of the DSGE-VAR must always be higher than that of the DSGE model, but this is not my field of expertise, so I may be wrong. I read Del Negro/Schorfheide (2004) as saying that they look for an interior maximum of the marginal data density by choosing an appropriate value for lambda. This suggests that there might be cases where the VAR adds nothing and the DSGE model is preferable.
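To illustrate that logic, a small sketch; the function below is a purely hypothetical stand-in for re-estimating the DSGE-VAR at a given lambda and recording its log marginal data density:

```python
import numpy as np

# Hypothetical stand-in: in practice each evaluation means re-estimating the
# DSGE-VAR with prior weight lambda and recording its log marginal data density.
def log_mdd_at(lam):
    return -430.0 - 5.0 * (np.log(lam) - np.log(0.75)) ** 2

lambdas = np.array([0.33, 0.5, 0.75, 1.0, 1.5, 2.0, 5.0])
vals = np.array([log_mdd_at(lam) for lam in lambdas])
print("lambda with the highest log marginal density:", lambdas[vals.argmax()])
# An interior maximum (here at 0.75) says the data want some, but not full,
# weight on the DSGE prior; a maximum at the largest lambda would favor
# the DSGE model itself.
```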

Thank you, jpfeifer, I appreciate your answer. I just want to clarify something. I find:

ESTIMATION RESULTS

Log data density is -424.477452.

Model                         article2modif   article2modifdsge
Priors                             0.500000            0.500000
Log Marginal Density            -431.269361         -451.641892
Bayes Ratio                        1.000000            0.000000
Posterior Model Probability        1.000000            0.000000

In this case I know that I should select -431.27, i.e., the model “article2modif”. But what is the difference between the “Log data density” in the estimation results and the “Log Marginal Density”? Are they not the same? For article2modif, should I select the Log data density of -424.477452 or the Log Marginal Density of -431.269361?
Thank you again

You would need to post the full output. My guess is that the two numbers are based on different approaches to computing the marginal data density: one should be based on the Geweke modified harmonic mean estimator, the other on the Laplace approximation. Fortunately, they give the same qualitative result.
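For intuition, the generic Laplace approximation can be sketched as follows (this is the textbook formula, not Dynare's exact implementation):

```python
import numpy as np

def laplace_log_mdd(log_kernel_at_mode, neg_hessian_at_mode):
    """Laplace approximation to the log marginal data density.

    log_kernel_at_mode:   log p(y | theta_hat) + log p(theta_hat) at the posterior mode
    neg_hessian_at_mode:  negative Hessian of the log posterior kernel at the mode (d x d)
    """
    d = neg_hessian_at_mode.shape[0]
    sign, logdet = np.linalg.slogdet(neg_hessian_at_mode)
    assert sign > 0, "the mode must be an interior maximum"
    return log_kernel_at_mode + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet

# Toy check: for a Gaussian kernel the approximation is exact. The kernel
# exp(-0.5 * x^2) has log-kernel 0 at the mode and negative Hessian [[1.0]],
# so its integral is sqrt(2*pi), i.e. log = 0.5*log(2*pi).
print(laplace_log_mdd(0.0, np.array([[1.0]])), 0.5 * np.log(2.0 * np.pi))
```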
