Model Comparison - Bayes Ratio

Dear all,

I perform a model comparison via the dynare command:

model_comparison (marginal_density=laplace) rob30_model1(0.5) rob30_model2(0.5);

Both models were estimated on slightly different datasets and rob30_model2 contains more estimated parameters. I obtain the following results:

Model Comparison (based on Laplace approximation)

                               rob30_model1    rob30_model2
  Priors                           0.500000        0.500000
  Log Marginal Density          2677.598998     3733.456053
  Bayes Ratio                       1.000000            Inf
  Posterior Model Probability       0.000000        1.000000

Based on this I have two questions:

  1. What is the reason that I end up with a Bayes Ratio of 1:0?

  2. In both models the prior distributions have been truncated for the estimation. Does this mean that the model comparison is invalid anyway?

Thanks for your help and sorry for the basic questions.

Best

Robert

There is a difference of about 1050 log points in the marginal data density. That is huge: model 2 is e^{1050} times more likely than model 1. So it is not surprising that model 2 gets all the probability when the two probabilities need to add up to 1.
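To see why one model absorbs all the probability mass (and why Dynare reports an Inf Bayes ratio), the posterior model probabilities can be recomputed by hand from the log marginal densities in the table. Below is a minimal Python sketch, not Dynare code; the function name `posterior_probs` is my own. Exponentiating a log density of ~3733 directly overflows to infinity, so the computation shifts by the maximum first (log-sum-exp):

```python
import math

def posterior_probs(log_mdd, priors):
    """Posterior model probabilities from log marginal data densities.

    A naive exp(log_mdd) overflows for values like 3733 (which is why a
    ratio of such terms shows up as Inf), so we subtract the maximum
    log posterior weight before exponentiating (log-sum-exp trick).
    """
    log_post = [math.log(p) + m for p, m in zip(priors, log_mdd)]
    shift = max(log_post)
    w = [math.exp(lp - shift) for lp in log_post]
    total = sum(w)
    return [x / total for x in w]

# Log marginal densities and equal priors from the table above
probs = posterior_probs([2677.598998, 3733.456053], [0.5, 0.5])
print(probs)  # model 2 receives essentially all of the probability
```

With a gap of roughly 1050 log points, the weight of model 1 underflows to exactly zero in double precision, reproducing the 0/1 split in the Dynare output.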

Your comparison is invalid for two reasons:

  1. The (implicit) prior truncation, if it differs between models.
  2. The data are not the same across the two models.

Thanks a lot Johannes!

Dear all,

unfortunately, I have a similar problem. I have two log-linearized models (a Baseline model and a second model that adds a habit parameter). When I compare them with the model_comparison command, the output does not seem plausible (see attachment).
Model comparison (Baseline & ExtHabit).pdf (151.6 KB)

I would like to compare the models via posterior odds ratio, which is of course not possible, since the posterior model probability of one model is 0.
Additionally, even when comparing the models under internal and external habit formation, the posterior odds ratio is very high (PO = 68.456432).
I have attached the datafile and Estimationfiles.

Could someone tell me what the problem is and where I'm going wrong?

Thanks a lot for your help!

Best,

Lisa

Datafile_Estimationfiles.zip (79.6 KB)

Why do you think there is a problem? The data just tells you that the model with habits is almost infinitely more likely to have generated it. There is a difference of 30 log points in the MDD.

Dear Prof. Pfeifer,

I thought that this value was too high to be plausible, especially since the model under internal habit formation is 68 times more likely than the model under external habits, even though the two models themselves are very similar.
What value range would indicate that the models are roughly equally likely, and what value range would suggest that one model is clearly favored over the other?
After reading several posts on this topic in the forum, it seems common that the difference in posterior model probabilities between two models is very large, even though the differences between the models being compared are small.

Thanks a lot for your help !

Best,

Lisa

Particularly for rather small models with few parameters and features, one additional feature can make a big difference. Standard models have a very hard time explaining why consumption is so smooth given typical risk aversion/EOIS values. Thus, I do not find your results very surprising.
Regarding the interpretation of Bayes factors:
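As a rule of thumb for reading such numbers, the widely used Kass and Raftery (1995) scale grades evidence by 2·ln(Bayes factor). The small Python sketch below (the function `kass_raftery` is my own naming, not a Dynare routine) applies it to the posterior odds of about 68 reported in this thread:

```python
import math

def kass_raftery(bayes_factor):
    """Classify evidence strength via the Kass-Raftery scale on 2*ln(BF)."""
    score = 2 * math.log(bayes_factor)
    if score < 2:
        return "not worth more than a bare mention"
    if score < 6:
        return "positive"
    if score < 10:
        return "strong"
    return "very strong"

# Posterior odds of ~68 from the internal vs. external habit comparison:
print(kass_raftery(68.456432))  # 2*ln(68.46) ≈ 8.45 -> "strong"
```

So odds of 68:1 count as "strong" but not yet "very strong" evidence on this scale; the 0-vs-1 posterior model probabilities seen earlier correspond to scores far beyond the "very strong" threshold.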


It is clear now, thanks for your help!