The convergence problem

Hi everyone

When the model has a convergence problem, it usually means one of the following:

  1. The number of iterations is not sufficient, or
  2. The mode found in the first step is not the true global mode (finding the true global mode is very difficult, especially for a large model with more than 30 unknown parameters).

Thus, when I face a convergence problem, my usual solutions are:
Solution 1: increase the number of iterations for each MH chain.
Solution 2: check the mode found in the first step before launching the MH algorithm.
Are these solutions sensible? If not, please correct me.
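In Dynare terms, I understand these two solutions roughly as the options sketched below (the data file name and the numbers are placeholders for illustration, not recommendations):

```
// Not a complete .mod file: only the estimation command, with placeholder values.
estimation(datafile=mydata,      // hypothetical data file name
           mode_compute=4,       // solution 2: re-run the mode finder (csminwel) first
           mh_replic=500000,     // solution 1: more MH iterations per chain
           mh_nblocks=2,         // several chains, needed for the convergence diagnostics
           mh_jscale=0.3);       // proposal scale, tuned toward a 25-35% acceptance rate
```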

I have 3 questions:

  1. Besides an insufficient number of iterations and a mode that is not truly global, are there other reasons that cause convergence problems?

  2. What are the possible solutions to the convergence problem?

  3. Is there any strategy or experience for finding the "truly global" mode before launching the MH algorithm?
    (Dynare offers many mode-finding routines; which one is better? A sketch of what I mean is below.)
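
To be concrete about question 3, what I have in mind is running the mode-finding step alone with different routines and comparing the resulting modes and their posterior densities, roughly like this (the data file name and option values are only illustrative):

```
// Mode-finding only: mh_replic=0 skips the MCMC stage.
// Re-run with different mode_compute values (e.g. 4, 6, 9) and compare the modes found.
estimation(datafile=mydata,    // hypothetical data file name
           mode_compute=6,     // Monte-Carlo based optimizer; 4 = csminwel, 9 = CMA-ES
           mode_check,         // plot posterior slices around the candidate mode
           mh_replic=0);
```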

Thank you so much guys

Dear Peter,

The initial condition not being a global mode is not really a problem for the MCMC. It is only a problem if you wish to report the mode of the posterior distribution, i.e. if your loss function is 0-1 rather than quadratic (under a quadratic loss, the optimal point estimate, i.e. the one minimizing the posterior expected loss, is the posterior mean). The main requirement for the initialization of the Metropolis-Hastings algorithm is that you start from points (plural, because you may have more than one chain) where the density is positive. Dynare randomly chooses the initial conditions by sampling from a Gaussian distribution centered at your estimate of the posterior mode, and checks that the density there is positive (and not too low compared to the density at the estimated posterior mode).

Again, in your case you have to increase the number of iterations. A possible reason for convergence issues is that the unknown posterior distribution is multimodal. In this case you will need many more iterations. The more separated the modes are, in the sense that the density between the different modes is very small, the more iterations you will need, because it will be harder for all the chains to visit all the high-density regions. In the limit case where there are regions of zero density between the modes, the Metropolis-Hastings algorithm cannot converge.

Multimodality of the posterior distribution is intrinsically a property of the model and the priors. A poorly chosen prior (with information at odds with the sample information) may result in multimodality.

Best,
Stéphane.

Dear Stéphane,

Thank you so much for your detailed explanation.

My understanding is that if my model has a convergence problem, it may be caused by the unknown posterior distribution being multimodal.
So, to overcome this problem, I should increase the number of iterations for each MH chain.

However, even when I increase the number of iterations for each MH chain as much as possible, my model still does not converge. This may imply that the posterior is multimodal, with regions of zero density between the modes, so the Metropolis-Hastings algorithm cannot converge.
To overcome this problem, instead of increasing the number of iterations for each MH chain, I should now revise a poorly chosen prior, since a poorly chosen prior may cause multimodality.
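For example, I imagine revising a too-tight prior in the estimated_params block along these lines (the parameter name and the numbers are purely hypothetical):

```
estimated_params;
  // Hypothetical: widen and re-center a prior that is at odds with the sample information,
  // e.g. instead of a very tight beta_pdf prior with mean 0.95 and std 0.01.
  rho, beta_pdf, 0.5, 0.2;   // prior mean 0.5, prior standard deviation 0.2
end;
```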
Is that correct? If not, please correct me, @stepan-a.
Thank you so much for that

Yes. The first thing is to substantially increase the number of iterations and see what happens. You can also try another MCMC algorithm (see the posterior_sampling_method option in the reference manual).
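
For instance, something along these lines (values only indicative; check the reference manual of your Dynare version for the available samplers):

```
// Example: use the slice sampler instead of the default random-walk Metropolis-Hastings.
// It is slower per draw but does not rely on a tuned random-walk proposal.
estimation(datafile=mydata, mode_compute=4,
           posterior_sampling_method='slice',
           mh_replic=100000);
```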

Best,
Stéphane.

Dear @stepan-a

Thank you so much for your useful advice.

With best regards