How can I improve my estimate?

Hi everyone, hi Professor Pfeifer.

I have been estimating a model whose parameters I cannot get to converge in the MCMC. At one point I achieved log data densities of approximately -450, but now I can no longer reproduce them, even though all I have changed since then is the priors.

I have looked at the trace plots and see that many parameters show trends, even with up to 2,000,000 draws. What do you do in such cases? How do the experts deal with this?

  1. Suppose a parameter (say alpha) is trending in the trace plot and moving towards some value x. Should I shift the prior distribution towards this value?
  2. I have calibrated some parameters and estimated others. Would it help to move parameters between the two groups, i.e., fix some estimated parameters at values from the literature, or add some of the calibrated parameters to the estimation?

I should mention that I am applying the model to another country and modifying it a bit. Furthermore, the original model was not estimated with Bayesian techniques but was calibrated in the standard way, and it did not include monetary policy.

I can attach mode plots or trace plots if necessary, or even the .mod and .mat files needed to estimate it.

Thanks in advance.

What does the trace-plot for the posterior density say? Is the chain moving towards a higher posterior density? If yes, initial mode-finding was not successful. You should then restart mode-finding based on the _mh_mode-file.

Note that adjusting the prior based on posterior results is wrong. It will screw up the inference as the prior is supposed to capture your view before you see the data.

Sometimes, after changing some priors, there is a trend in the trace plot of the posterior density, moving towards a higher value.
Another thing I didn’t mention: the mode-finder only works well with mode_compute = 6.
How do I restart the mode-finding? Are you referring to using load_mh_file, keeping mode_compute = 6, and re-estimating?

  1. If only mode_compute=6 works, then there is typically some deeper problem.
  2. No, don’t use load_mh_file. Use mode_file=...,mode_compute=x, as in the sketch below.
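
A minimal sketch of such a restart, assuming the file is called model.mod so that the previous run left a model_mh_mode.mat file (the datafile name and the choice of optimizer are placeholders):

```
// Restart mode-finding from the mode recorded during the previous MCMC run.
// mode_file loads the saved _mh_mode file; mh_replic=0 skips the MCMC so
// that only the mode is recomputed.
estimation(datafile=mydata, mode_file=model_mh_mode, mode_compute=4, mh_replic=0);
```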

I did see a post where you commented that if only mode_compute = 6 works, there is a problem. The strange thing is that the model is exactly the same as the one in the paper, apart from the equations I mentioned. Also, it is a medium-scale model with many parameters and 11 shocks; I don’t know whether that matters.
Still, thank you very much. I will try this to see whether I can improve the estimate. Have a good day.

mode_compute=6 will usually generate a positive definite Hessian. At the same time, it is a pretty inefficient optimizer. For that reason, it is often advisable to start with one of the other mode-finders and then run mode_compute=6 at the end.
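
For instance, a common two-step pattern (a sketch; the datafile name and the choice of mode_compute=4 for the first step are illustrative):

```
// Step 1: locate the mode with a faster optimizer (here mode_compute=4,
// i.e. csminwel) without running the MCMC.
estimation(datafile=mydata, mode_compute=4, mh_replic=0);

// Step 2: refine the mode and obtain a positive definite Hessian with
// mode_compute=6, starting from the mode found in step 1.
estimation(datafile=mydata, mode_file=model_mode, mode_compute=6, mh_replic=0);
```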

Have you considered changing the posterior sampling method? ‘tailored_random_block_metropolis_hastings’ is much better at handling multi-modal posteriors.
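
In Dynare this would look roughly as follows (a sketch; the datafile and the number of draws are placeholders):

```
// Switch from the default random walk Metropolis-Hastings to the tailored
// random block Metropolis-Hastings sampler.
estimation(datafile=mydata, mode_compute=6, mh_replic=500000,
           posterior_sampling_method='tailored_random_block_metropolis_hastings');
```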


I’ve been doing what you suggested and the model has improved. But while I’m here, I have a question: is there an acceptable range for the log data density? For example, if I have a log data density of -1200, is that a bad sign? I ask in case my work could be rejected on those grounds. Thanks in advance.

My interpretation, which I’m pretty sure is basically correct, is that the log data density tells you the likelihood of seeing a data series given the model. The likelihood function gives you the likelihood of seeing a data series given a model and a specific parameter draw.

Log data density gives an ordinal ranking, so you need some other model explaining the same data series against which to compare it. An RBC model estimated using only US GDP will have a substantially higher marginal data density than, say, the Smets/Wouters (2007) model, even though the Smets/Wouters model is one of the best DSGE models: it was the first such model whose marginal data density was comparable to that of a reduced-form VAR, while RBC models are a known dumpster fire when it comes to explaining quarterly variation in the economy.

Since you mentioned in your opening post that your first model appeared to have a marginal data density of -450, a model with a marginal data density of -1200 would be very problematic: these are log probabilities, so your old model at -450 is astronomically more likely to have generated the data than the new one at -1200.
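
To make that concrete, the Bayes factor in favor of the old model follows directly from the difference in log marginal data densities (assuming equal prior model probabilities):

```latex
\frac{p(Y \mid M_{\text{old}})}{p(Y \mid M_{\text{new}})}
  = \exp\bigl(\ln p(Y \mid M_{\text{old}}) - \ln p(Y \mid M_{\text{new}})\bigr)
  = \exp\bigl(-450 - (-1200)\bigr) = e^{750} \approx 10^{326}.
```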


Thank you very much, your answer was very informative.

I’ve been doing what you suggested and the estimate has improved quite a bit, except for one thing. Initially I used mh_jscale = 0.2; then, when I restart from the mode, I get acceptance ratios of 56.983% and 55.691%. If I change to mh_jscale = 0.275, but only on the restart, I get 37.642% and 36.805% with stationary trace plots. My question is:
Is it valid to start with one mh_jscale and use another on the restart to obtain a good acceptance ratio?
Thank you very much for your help

A couple of things:

  1. As @jthomp10 points out, the marginal data density is largely irrelevant when working with one given model. We care about the posterior density.
  2. The acceptance rate should be around 1/3 for the ergodic part of the chain that you are later using. Changing the mh_jscale once you have reached that part of the distribution is allowed; after all, you are discarding the earlier part of the chain due to non-convergence. What you are not allowed to do is keep adjusting the mh_jscale during the chain’s run without it ever settling at a fixed value. A sketch of such a restart follows below.
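
A minimal sketch of this in Dynare, assuming the first chain has already reached the ergodic region (the file and option values are illustrative):

```
// First run: the chain started with mh_jscale=0.2, but the acceptance rate
// came out far above 1/3.
estimation(datafile=mydata, mode_compute=6, mh_replic=1000000, mh_jscale=0.2);

// Restart from the saved mode with a larger scale to push the acceptance
// rate towards 1/3; the draws from the first run are discarded as burn-in.
// mode_compute=0 reuses the saved mode instead of re-optimizing.
estimation(datafile=mydata, mode_file=model_mh_mode, mode_compute=0,
           mh_replic=1000000, mh_jscale=0.275);
```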

Just to verify, these are the steps I followed:

  1. Run model.mod with estimation(…, mode_compute = 6, mh_jscale = 0.2). Result: chain 1 = 36%, chain 2 = 35% (trace plots not stationary and MCMC does not converge).
  2. Copy model_mh_mode and paste it into a new folder.
  3. Run model2.mod with estimation(…, mode_compute = 6, mh_jscale = 0.2). Result: chain 1 = 56%, chain 2 = 65% (trace plots look good).
  4. Run model2.mod with estimation(…, mode_compute = 6, mh_jscale = 0.3). Result: chain 1 = 33%, chain 2 = 35% (trace plots look good).

Is the procedure correct? And another question: can I do this an indefinite number of times?

That looks good to me.
