You may have noticed that using the posterior mode optimization routine mode_compute=6 seems to produce different mode parameter estimates every time it is run.

Is anyone aware of what should be changed in Dynare in order to ensure that mode_compute=6 produces mode values which are replicable?

Thank you very much in advance for any feedback or comment on this issue!

As mode_compute=6 is based on Monte Carlo methods and thus involves drawing random numbers, you would have to fix the seed of the random number generator in Matlab, I guess.

Thanks a lot for your reply. Do you know exactly where in the Dynare files one should fix the seed?
And more generally, is it not a huge drawback of mode_compute=6 if its results depend so crucially on the particular random nature of the Monte Carlo draws?

You can put

s = RandStream('mt19937ar','seed',1);
RandStream.setDefaultStream(s);

before the estimation command. This should set the seed to 1.
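For what it's worth, newer MATLAB releases (R2011a and later) deprecate RandStream.setDefaultStream in favor of the simpler rng interface; a minimal equivalent, assuming you just want a fixed seed, would be:

```matlab
% Place this before the estimation command in your .mod file.
% Seeds MATLAB's default Mersenne Twister generator with 1, so that
% the Monte Carlo draws used by mode_compute=6 are reproducible.
rng(1);
```

Either variant should make repeated runs draw the same random numbers, and hence return the same mode.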

I am no expert on this issue, but I do not really see a need to fix the seed. The mode_compute=6 algorithm works like the Metropolis-Hastings algorithm used in Bayesian estimation. Every time you run the Random Walk Metropolis-Hastings algorithm with a different seed you will get different results for your posterior, as the draws from the proposal distribution will differ. However, regardless of the starting values and the seed of the random number generator, the estimation should converge to the true posterior with the correct mode. Hence, mode_compute=6 should give you approximately the same results every time you run it. If it does not, the estimation may not have converged to the mode yet.
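To see why the seed should not matter in the limit, here is a toy sketch of a random-walk Metropolis-Hastings loop (illustrative only, not Dynare's actual implementation; the log-posterior logpost and the proposal scale sigma are made up for the example):

```matlab
% Toy random-walk Metropolis-Hastings: for any seed, the draws
% eventually settle around the same posterior (here a N(2,1) toy target).
logpost = @(x) -0.5 * (x - 2)^2;   % toy log-posterior, peaked at 2
x      = 0;                        % arbitrary starting value
sigma  = 1;                        % scale of the jumping distribution
ndraws = 10000;
draws  = zeros(ndraws, 1);
for i = 1:ndraws
    prop = x + sigma * randn;      % candidate from the random walk
    if log(rand) < logpost(prop) - logpost(x)
        x = prop;                  % accept the candidate
    end                            % otherwise keep the current value
    draws(i) = x;
end
% With enough draws, mean(draws) is close to 2 whatever the seed;
% only short chains depend visibly on the random numbers drawn.
```

Different seeds change the individual draws, but not the distribution the chain converges to, which is the sense in which the results should be "approximately the same" across runs.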

Do you actually refer to the number of draws in the Metropolis-Hastings? I am asking because as far as I can see one is not really able to control “iterations” during the mode_compute=6 mode optimization stage. Correct me if I have misunderstood.

I tried your suggestion and my experience shows that this option controls the number of iterations in the “Looking for the posterior covariance” stage, while “Climbing the hill” (which is supposed to be the optimization stage) seems unaffected. Is this in your opinion a step towards making the modes come closer to each other in different estimation rounds?

I am sorry George, I misunderstood your previous question. You are right, the options_.Opt6Numb= …; command controls the “Looking for the posterior covariance” stage. Anyway, to go back to your original problem, as is also explained in the DynareWiki, this is not an optimization procedure. This routine just helps you to find a good starting point for the MH algorithm (the one for the parameters’ posterior distributions) and a good covariance matrix for the jumping distribution. That’s it.
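Based only on the option name mentioned above (check the documentation for your Dynare version, as internal options_ fields can change between releases), setting it would look something like this:

```matlab
% In the .mod file, before the estimation command.
% Lengthens the "Looking for the posterior covariance" stage of
% mode_compute=6; the value 50000 is an arbitrary example.
options_.Opt6Numb = 50000;
```

Since this stage only shapes the covariance of the jumping distribution, a larger value may stabilize that matrix across runs, but it does not by itself pin down the mode.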
To be honest I have never checked if modes change every time you run the routine. I will try and I will let you know.

That will definitely be appreciated! What is most interesting for me is if mode_compute=6 can be forced to produce the same (or at least very similar) posterior modes which then are used to initialize the MH and how this can be achieved.

I tried and it gives exactly the same modes (up to the last decimal digit) for all the parameters.
Anyway, the Sims algorithm is also stochastic when it perturbs the search direction. So if you try with that you should encounter the same problem, namely ending up with different modes every time. If that is the case, it may mean that your priors are not that nice, because the sets of parameters drawn along the iterations produce too many cliffs (their values yield either indeterminacy or no equilibrium solution). So you may want to redefine your priors in order to constrain the domain of your parameters more properly.