After finding the posterior mode using mode_compute=6, I implement the RWMH MCMC algorithm.
In the options for mode_compute=6, I specified a target acceptance ratio of 20 percent.
Then, for the MCMC, I set mh_jscale using the saved optimal_mh_scale_parameter from the output of the Monte Carlo optimiser:
load([M_.fname '_optimal_mh_scale_parameter.mat'], 'Scale');
options_.mh_jscale = Scale;
Would you expect this strategy to yield an average acceptance ratio of 20% in the MCMC, just as the Monte Carlo optimiser achieved?
From what I can see, it does not!
So perhaps I have misunderstood what the Scale parameter saved in the output of the Monte Carlo optimiser represents.
I am not sure I understand. When you write “I implement the RWMH MCMC”, do you mean that you wrote your own algorithm? In that case you should take not only the optimal scale parameter from mode_compute=6 but also the estimate of the posterior covariance matrix. How far are you from the targeted ratio?
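To see why both pieces matter, here is a schematic RWMH proposal draw. The variable names and values are illustrative stand-ins, not Dynare internals:

```matlab
% Schematic RWMH proposal draw; Sigma and Scale stand in for the estimated
% posterior covariance at the mode and the tuned scale from mode_compute=6.
Sigma = [1.0 0.3; 0.3 0.5];           % illustrative posterior covariance estimate
Scale = 0.4;                          % illustrative tuned scale factor
current = zeros(2, 1);                % current parameter draw
R = chol(Sigma, 'lower');             % factorise the proposal covariance
proposal = current + Scale * (R * randn(2, 1));
% The acceptance ratio depends on Scale and Sigma jointly: reusing the saved
% Scale with a different proposal covariance will generally not hit the target.
```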
You did not post the log file, but it may be that you need to increase the number of iterations in mode_compute=6 to get the acceptance ratio right.
Thanks Stephane. By “implementing the RWMH”, I meant just using the mode to start the standard MCMC after mode_compute=6. Perhaps increasing the number of iterations will solve the problem; currently, I am just using the default options:
options_.gmhmaxlik.target = 0.2;        % target MCMC acceptance ratio
options_.gmhmaxlik.iterations = 3;      % [default=3] number of calls to the optimization routine; improves the estimates of the posterior covariance matrix and of the posterior mode
options_.gmhmaxlik.number = 20000;      % [default=20000] number of simulations used to estimate the covariance matrix
options_.gmhmaxlik.nscale = 200000;     % [default=200000] number of simulations used to tune the scale parameter
options_.gmhmaxlik.nclimb = 200000;     % [default=200000] number of simulations used in the hill-climbing phase
You may try to increase the value of nscale. But I would need to see the log to be sure.
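For instance (the exact value is model-dependent and purely illustrative):

```matlab
options_.gmhmaxlik.nscale = 500000;  % more tuning simulations than the 200000 default
```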
I am also using mode_compute=6 with the parallel command to estimate a model, and I am struggling to get an acceptable acceptance ratio. My estimation command uses these options:
mh_replic=2000000, mh_nblocks=4, mh_drop=0.45, mh_jscale=97.85, …, optim=('AcceptanceRateTarget', 0.234), …;
My understanding is that this approach updates the jscale parameter (producing optimal_mh_scale_parameter) so as to move towards the target ratio. However, the acceptance ratio rarely lands between 0.23 and 0.29, and to get it into this range I have been changing mh_jscale by hand.
From my reading of the guide, though, this value gets overwritten immediately, so the estimation should be independent of this option; yet changing it moves the acceptance ratio in the direction theory predicts (i.e. as I increase it, the ratio falls). This leaves me with two questions:
A) If it is updating, why does changing the mh_jscale seem to affect my results non-randomly?
B) How do/Can I change the options to better reach the target acceptance ratio?
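On (B): Dynare's own tuning aside, the generic mechanism behind acceptance-ratio targeting is a stochastic-approximation update of the scale. A minimal sketch on a toy standard-normal target (illustrative only, NOT Dynare's internal routine):

```matlab
% Adapt the RWMH scale towards a target acceptance ratio (toy example).
target = 0.234; scale = 1.0; x = 0;
logp = @(z) -0.5 * z.^2;                  % log-density of a standard normal
for t = 1:5000
    prop = x + scale * randn;             % random-walk proposal
    accepted = log(rand) < logp(prop) - logp(x);
    if accepted, x = prop; end
    % accepting too often -> enlarge the scale; too rarely -> shrink it
    scale = scale * exp((accepted - target) / sqrt(t));
end
% A larger fixed scale means bolder proposals and hence a lower acceptance
% ratio, which matches the behaviour described above.
```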
Why are you using mode_compute=6 here? In my experience, needing it often indicates deeper underlying problems hidden in the model.
The other optimisers do not give me a positive definite Hessian, so the Cholesky factorisation fails. This sometimes happens with mode_compute=6 too, but only rarely.
As I said, that is typically an indicator of deeper problems. What do the mode_check plots look like?
Typically the two lines match in the majority of plots, except for one where a shock variance or persistence parameter shows additional curvature relative to the posterior. In the estimation currently running, everything matches except a single standard error for one of the biased tech shocks (uploaded here, hopefully):
TwoEducTaxCES_CheckPlots1.fig (128.6 KB)
TwoEducTaxCES_CheckPlots2.fig (116.9 KB)
No, the plots look really poor.
rhoi seems to be at a boundary, and none of the “modes” is at the peak of its respective posterior slice.
Should I adjust the priors and/or the estimation options, or is this indicative of another issue? My immediate thought would be to change the priors to start closer to the indicated peaks.
For bigger models, I use a sequence of Dynare optimisers (mode_compute=1, 2, 4, 7, 8, 9), and then, if that does not give me a positive definite Hessian, I use mode_compute=6.
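One way to run such a sequence is to feed each run's saved mode into the next call via the mode_file option. Schematically (option lists abbreviated; the data file and mode-file names are placeholders that follow your own .mod file name):

```
estimation(datafile=mydata, mode_compute=4, mh_replic=0);                         // first pass, mode search only
estimation(datafile=mydata, mode_compute=9, mode_file=mymodel_mode, mh_replic=0); // refine, starting from the saved mode
estimation(datafile=mydata, mode_compute=6, mode_file=mymodel_mode);              // fall back if the Hessian is still not positive definite
```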
Before going brute-force like @punnoosejacob, you should check whether everything in your estimation works as expected. Check the observation equations, the variable scaling, etc.
Ok, thank you both for your help