Laplace approximation vs. Harmonic Mean Estimator

Dear All,
let’s say I want to compare the marginal data densities of two models (A and B). Furthermore, let’s disregard for a moment the problem that the set of estimated parameters differs across the two models (say we are confident we solved it via the training-sample approach). What do I do if, under the Laplace approximation, model A has the higher marginal data density, while under the harmonic mean estimator (HME), model B wins? Is one of the two methods clearly superior?
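For concreteness, the two quantities I have in mind are the standard ones (writing $\theta^{*}$ for the posterior mode, $k$ for the number of estimated parameters, $\Sigma$ for the inverse of the negative Hessian of the log posterior kernel at the mode, and $\theta^{(i)}$, $i=1,\dots,N$, for the MCMC draws). The Laplace approximation is

$$\log p(Y \mid M) \approx \log p(\theta^{*} \mid M) + \log p(Y \mid \theta^{*}, M) + \frac{k}{2}\log(2\pi) + \frac{1}{2}\log\lvert\Sigma\rvert,$$

and the (modified) harmonic mean estimator is

$$\hat{p}(Y \mid M) = \left[\frac{1}{N}\sum_{i=1}^{N}\frac{f(\theta^{(i)})}{p(\theta^{(i)} \mid M)\, p(Y \mid \theta^{(i)}, M)}\right]^{-1},$$

where $f$ is a weighting density with tails thinner than the posterior’s (a truncated normal in Geweke’s modified version).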
Many thanks for your help!
Best,
Ansgar

Please see the thread “The log marginal data density in mode_compute=6”.

Hi Johannes,
thank you for your quick response. I had seen the footnote in Smets and Wouters (2007), and in our estimation we also find that the Laplace approximation and the HME yield results that are very close. However, in our application they still give different answers to the question of which model is preferred by the data, so it would be interesting to know whether either method is considered better. Smets and Wouters (2007) cite the computational advantage of the Laplace approximation, but don’t really say which method is more accurate. Would you say that neither method is necessarily better?
Best,
Ansgar

That is hard to tell. The Laplace approximation relies on the posterior being asymptotically normal. With bounded parameters, that approximation can be quite poor if the mode is close to the bounds; in that case I would expect the modified harmonic mean estimator to perform better. In contrast, the modified harmonic mean estimator relies on the MCMC correctly sampling from the posterior and may suffer from poor convergence. You might want to check the prior-posterior plots and the trace plots for any clues in your application.
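To make the difference concrete, here is a small illustrative sketch (plain Python with NumPy/SciPy, not Dynare output; the toy model and all names are my own choices) that computes both estimators for a bounded-parameter problem where the exact marginal data density is known and the posterior mode sits near the bound:

```python
import numpy as np
from scipy import stats
from scipy.special import betaln

# Toy problem: y_i ~ Bernoulli(p) with flat prior p ~ U(0,1) and 19 successes
# out of 20 trials, so the posterior mode sits close to the upper bound p = 1.
successes, failures = 19, 1

def log_kernel(p):
    """Log posterior kernel: log prior (zero under U(0,1)) + log likelihood."""
    return successes * np.log(p) + failures * np.log(1.0 - p)

# Exact log marginal data density (analytical for this conjugate toy case).
log_mdd_exact = betaln(successes + 1, failures + 1)

# Laplace approximation at the posterior mode.
p_mode = successes / (successes + failures)                      # = 0.95
neg_hess = successes / p_mode**2 + failures / (1.0 - p_mode)**2  # -(d2/dp2) log kernel
log_mdd_laplace = (log_kernel(p_mode) + 0.5 * np.log(2.0 * np.pi)
                   - 0.5 * np.log(neg_hess))

# Modified harmonic mean estimator (Geweke, 1999) from posterior draws.
# Here the draws are exact; in practice they come from the MCMC, so the
# estimate inherits any convergence problems of the chain.
rng = np.random.default_rng(0)
draws = rng.beta(successes + 1, failures + 1, size=200_000)
tau = 0.5                         # truncation probability of the weighting density
mean, var = draws.mean(), draws.var()
inside = (draws - mean) ** 2 / var <= stats.chi2.ppf(tau, df=1)
log_f = stats.norm.logpdf(draws[inside], loc=mean, scale=np.sqrt(var)) - np.log(tau)
ratios = np.zeros_like(draws)
ratios[inside] = np.exp(log_f - log_kernel(draws[inside]))
log_mdd_mhm = -np.log(ratios.mean())

print(f"exact               log p(Y): {log_mdd_exact: .4f}")
print(f"Laplace             log p(Y): {log_mdd_laplace: .4f}")
print(f"modified harm. mean log p(Y): {log_mdd_mhm: .4f}")
```

In this toy case the two numbers come out close but not identical, which is really the point: the Laplace number depends entirely on the local curvature at the mode, while the harmonic mean number depends entirely on the quality of the posterior draws, so each can be the less reliable one depending on which assumption is violated.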

Hi Johannes,
thank you, that is very helpful!
Best,
Ansgar