I have been trying to estimate a small-scale NK model with deep habits using my own MATLAB codes. However, it seems that csminwel fails to find the correct mode. That’s why I am now trying to use a Monte Carlo routine, as in mode_compute=6, to get the mean and covariance matrix for the jumping distribution in the MH algorithm. To that end, I have tried to write MATLAB code in the spirit of what mode_compute=6 does, but for some reason it’s not working. Could you please provide me with the mode_compute=6 code that DYNARE uses so that I can have a look at it for better understanding?
Moreover, could you briefly explain the reasoning behind the updating scheme for the mean and covariance matrix that DYNARE uses in mode_compute=6, as stated here:
Please find attached the MATLAB code I have written in the spirit of mode_compute=6 as described in the above link.
Looking forward to your response.
modefindingMC.m (4.03 KB)
mode_compute=6 is basically a simulated annealing-type routine. Instead of programming something like this yourself, I would recommend using any freely available global optimizer. Note also that any jumping covariance matrix works: you could just pick a scaled identity matrix to start your MCMC and, once it has converged to a region of higher likelihood, start mode-finding again. This is essentially what mode_compute=6 does.
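For illustration, here is a minimal random-walk Metropolis-Hastings sketch in Python (not Dynare's actual code; the function name `rwmh` and the toy bivariate-normal target are my own choices) showing that a scaled-identity jumping covariance is enough to get a chain started:

```python
import numpy as np

def rwmh(log_post, x0, n_draws, scale=0.2, rng=None):
    """Random-walk MH with a scaled-identity jumping covariance (scale^2 * I)."""
    rng = np.random.default_rng(rng)
    dim = len(x0)
    draws = np.empty((n_draws, dim))
    x, lp = np.asarray(x0, float), log_post(x0)
    accepted = 0
    for i in range(n_draws):
        prop = x + scale * rng.standard_normal(dim)   # N(0, scale^2 * I) proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # MH acceptance step
            x, lp = prop, lp_prop
            accepted += 1
        draws[i] = x
    return draws, accepted / n_draws

# Toy target: standard bivariate normal log density (up to a constant)
log_post = lambda x: -0.5 * np.sum(np.asarray(x) ** 2)
draws, acc = rwmh(log_post, [3.0, -3.0], 20000, scale=1.0, rng=0)
```

Once the chain has drifted into a high-density region, the recent draws give you a starting point for a fresh round of mode-finding, as described above.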
If I understand this correctly, for the MH algorithm we don’t need to start exactly at the mode, but rather from a point with a high posterior density value, and we need a good covariance matrix for the jumping distribution. Now, what if I initially run an MH algorithm with the mean of the jumping distribution set to the prior mean and the variance-covariance matrix set to a scaled identity matrix? I would run this MH algorithm for, say, 1,000,000 draws and compute the mean of the last 500,000 draws (conditional on convergence). Next, I would run my ‘proper’ MH algorithm, now with the starting value set to the mean calculated in the previous step and the variance-covariance matrix of the jumping distribution set to the inverse of the Hessian computed at that mean.
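To make the second stage concrete, here is a small Python sketch (the helper `num_hessian` and the stand-in value `theta_bar` are hypothetical, and a toy log posterior replaces the actual model): compute the Hessian of the log posterior at the mean obtained from the preliminary chain, then invert its negative to get the jumping covariance.

```python
import numpy as np

def num_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    x = np.asarray(x, float)
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.zeros(n), np.zeros(n)
            e_i[i], e_j[j] = h, h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

# Toy log posterior: bivariate normal with covariance diag(1, 4)
log_post = lambda x: -0.5 * (x[0] ** 2 + x[1] ** 2 / 4.0)

# Stage 1 (stand-in): suppose the preliminary chain's second-half mean is near the mode
theta_bar = np.array([0.01, -0.02])

# Stage 2: jumping covariance = inverse of the negative log-posterior Hessian
Sigma_jump = np.linalg.inv(-num_hessian(log_post, theta_bar))
```

The negative Hessian of the log posterior at the mode approximates the inverse posterior covariance, so its inverse is a natural jumping covariance; for the toy target above it recovers diag(1, 4).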
What’s your view on such a procedure? I am trying to do it this way because oftentimes, as you know, Newton-type algorithms like Chris Sims’s csminwel get stuck due to cliffs in the log posterior.
This is perfectly fine. You just have to treat the initial draws as a burn-in. Theory tells you that, as long as the regularity conditions are met, any jumping covariance should work (though potentially only with close to infinite time/infinitely many draws). I would recommend using, e.g., CMAES for mode-finding first. It is freely available.
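On the recursive updating of the mean and covariance asked about earlier: one standard way to update both quantities draw by draw is the Welford-style scheme below (a generic Python sketch, not Dynare's implementation):

```python
import numpy as np

def recursive_moments(draws):
    """Welford-style recursive update of mean and covariance, one draw at a time.
    Illustrates how a Monte Carlo routine can refresh its mean/covariance
    estimates sequentially as new posterior draws arrive."""
    n, dim = draws.shape
    mean = np.zeros(dim)
    M2 = np.zeros((dim, dim))
    for t, x in enumerate(draws, start=1):
        delta = x - mean
        mean += delta / t                  # mu_t = mu_{t-1} + (x_t - mu_{t-1}) / t
        M2 += np.outer(delta, x - mean)    # running sum of outer products
    return mean, M2 / (n - 1)              # unbiased sample covariance

rng = np.random.default_rng(0)
chain = rng.standard_normal((1000, 2))  # stand-in for MH draws after burn-in
mean, cov = recursive_moments(chain)
```

This gives the same result as computing the sample mean and covariance of the retained draws in one pass, but can be applied online, which is convenient when the chain is long.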
Thanks for your suggestion. Just to clarify one more thing: is it the case that ‘repeated’ application of CMAES is essentially similar to mode_compute=6 in Dynare?
No, repeated application of an MCMC would correspond to mode_compute=6. CMAES is simply a global optimizer, like simulated annealing. It tends to work pretty well.