Bayesian estimation, marginal likelihood

Hello!

Could you please help me? I'm doing Bayesian estimation of my model, and I don't completely understand some of the results. When I reduce the number of shocks to be estimated, I get a higher log data density (it becomes less negative). Why does the model with the larger set of shocks give worse results in terms of log data density, even though the smaller set of shocks is also feasible in estimation?

Thank you in advance for your time.

With kind regards,
Alex

By construction, the marginal data density accounts for the degrees of freedom used. If the shocks you drop from the model do not explain much, you lose little in terms of fit, while the penalty for the additional estimated parameters goes away, so on net the log data density rises. Think of it like an adjusted R^2.
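
To see the mechanism in a self-contained way, here is a minimal sketch in Python (not Dynare, and not your model): a conjugate linear-Gaussian regression where the log marginal likelihood is available in closed form. The extra regressor `x_noise` plays the role of a shock that explains little; the variable names, prior variance `tau2`, and noise variance `sigma2` are all illustrative assumptions, not anything from your estimation.

```python
# Minimal sketch: the marginal likelihood penalizes parameters that add little fit.
# Assumed setup (illustrative): Bayesian linear regression with known noise
# variance sigma2 and coefficient prior N(0, tau2 * I). Integrating the
# coefficients out gives y ~ N(0, tau2 * X X' + sigma2 * I), so the log
# marginal likelihood is just a multivariate normal log density.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)        # informative regressor (a "shock" that matters)
x_noise = rng.normal(size=n)   # uninformative regressor (a "shock" that does not)
y = 1.5 * x1 + rng.normal(size=n)

sigma2, tau2 = 1.0, 10.0       # assumed noise and prior variances

def log_marginal(X):
    """Closed-form log marginal data density of y given design matrix X."""
    cov = tau2 * X @ X.T + sigma2 * np.eye(n)
    return multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

X_small = x1[:, None]                    # small model: one parameter
X_big = np.column_stack([x1, x_noise])   # big model: one extra, useless parameter
print("log data density, small model:", log_marginal(X_small))
print("log data density, big model:  ", log_marginal(X_big))
```

On a typical draw the bigger model returns the lower (more negative) value: the useless regressor barely raises the likelihood, but integrating over its prior spreads probability mass over parameter values the data do not support. That is the same Occam penalty that makes your larger shock set lose in terms of log data density.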

Dear Professor Pfeifer,
thank you for your response!