Reporting results


Could you please advise on reporting the results after Bayesian estimation is complete:

  1. Should I report the mean or the mode of the parameter estimates?
  2. For calibration based on the parameter estimates, is it better to use the mean or the mode?
  3. Similarly, which statistic should be used when reporting filtered variables?


Have a look at


Thanks Johannes

Could I ask another question, related to mode finding with optimisers?

In certain instances, when I try to find a mode using an optimiser (say mode_compute=1), the resulting matrix is negative definite, but the plots generated by mode_check look fine. When I then use the parameters found by mode_compute=1 as starting values and restart the mode search with mode_compute=6, I get a positive definite matrix, with only minor decimal changes to the initial parameters. Do you have a guess as to why this is?

Without seeing the specific case it is impossible to tell. But the way mode_compute=6 computes the covariance matrix is very different from the other optimizers: the latter try to compute the inverse Hessian, while the former directly uses the covariance of draws from an MCMC.
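As an aside, the two routes can be illustrated with a toy Gaussian posterior in Python (this is not Dynare code; the quadratic log-posterior and all variable names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic log-posterior: logpost(x) = -0.5 * x' H x, so the true
# posterior covariance equals inv(H).
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
true_cov = np.linalg.inv(H)

# Route 1 (gradient-based optimizers): invert the Hessian at the mode.
cov_from_hessian = np.linalg.inv(H)

# Route 2 (Monte Carlo, as in mode_compute=6): take the sample covariance
# of draws from the posterior itself.
draws = rng.multivariate_normal(np.zeros(2), true_cov, size=200_000)
cov_from_draws = np.cov(draws.T)

print(np.round(cov_from_hessian, 3))
print(np.round(cov_from_draws, 3))
```

For an exactly Gaussian posterior the two estimates agree up to sampling noise; with a non-quadratic objective or a badly conditioned numerical Hessian they can differ, which is one way a negative definite Hessian can coexist with sensible mode_check plots.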

Thanks Johannes

Could I ask another question? I just finished running Bayesian estimation: 1 chain with 50 million draws, an acceptance rate of around 26% that is stable from midway through. The Geweke test cannot reject the null of mean equality for any parameter.

Is it correct to claim in this case that the mode has been successfully estimated (regardless of initial conditions)?

Where are the results for the estimated mode stored (is it filename_mh_mode, or somewhere else)? Is there an equivalent of a 90% confidence band for the estimated mode, similar to the one reported for the mean (and if so, where is it stored)?

Thank you, and apologies if these types of questions have already been answered.

  1. You cannot make claims regarding the mode, only about the convergence of the chain to its ergodic distribution. This seems to be satisfied.
  2. You are doing Bayesian estimation, while confidence bands are a frequentist concept. Typically, people (and Dynare) report HPDIs (highest posterior density intervals) instead.
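For concreteness, here is a minimal Python sketch of a highest posterior density interval, assuming a one-dimensional sample of posterior draws (the `hpdi` helper and the Beta sample are hypothetical illustrations, not Dynare output):

```python
import numpy as np

def hpdi(draws, mass=0.9):
    """Shortest interval containing `mass` of the sorted draws."""
    x = np.sort(np.asarray(draws))
    n = len(x)
    k = int(np.floor(mass * n))  # number of draws spanned by the interval
    widths = x[k:] - x[: n - k]  # width of every candidate window
    i = np.argmin(widths)        # left edge of the shortest window
    return x[i], x[i + k]

rng = np.random.default_rng(1)
draws = rng.beta(2.0, 5.0, size=100_000)  # skewed posterior sample
lo, hi = hpdi(draws, 0.9)
print(lo, hi)
```

For a skewed posterior like this one, the HPDI is shorter than the equal-tailed 90% interval, which is why it is the usual Bayesian reporting choice.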

Thank you.


Could I ask a few questions? First, why is the sign of “fvalue” in the mode file always opposite to that of mh_mode (or is it just my estimation)? Second, with uniform priors, is it correct to say that a variable is “insignificant” if the estimated coefficient is pushed to zero?

Many thanks

  1. One is the posterior value, the other the output of the optimizer. Because optimizers are minimizers, we minimize minus the posterior, which flips the sign.
  2. With uniform priors, you are essentially doing maximum likelihood. Variables cannot be insignificant, only estimated parameters. Whether something is “insignificant” depends on the null hypothesis you want to test. If the test is against 0, then an HPDI containing zero can be interpreted as insignificance.
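The sign flip in point 1 can be sketched in Python, assuming a toy scalar log-posterior (the crude grid search merely stands in for a gradient-based optimizer such as mode_compute=1):

```python
import numpy as np

# Toy log-posterior of a single parameter: a downward parabola with its
# peak (the mode) at theta = 1.5 and peak value 2.0.
def log_posterior(theta):
    return 2.0 - (theta - 1.5) ** 2

# Numerical optimizers are minimizers, so they are handed MINUS the
# log-posterior; here a grid search plays that role.
grid = np.linspace(-5, 5, 100_001)
objective = -log_posterior(grid)
i = np.argmin(objective)

mode = grid[i]
fvalue = objective[i]        # what the optimizer reports
posterior_at_mode = -fvalue  # the posterior value, opposite in sign

print(mode, fvalue, posterior_at_mode)
```

The optimizer's reported objective is the negative of the posterior value, hence the systematic sign difference between the two files.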

Thanks Johannes. Could I ask another question? Say a parameter on the (0,1) interval is theoretically identified, but the identification is weak. A Beta prior results in a posterior that fully coincides with the prior. Is it correct to switch to a uniform prior in this case (the posterior then peaks at the border value, though)? Thanks again.

If the identification is so weak that the prior and the posterior coincide, I would always go for an informative prior.


Dear Johannes

Could I clarify another point about Dynare? After Bayesian estimation is complete, a set of figures is produced (IRFs, smoothed series, etc.). Are these figures based on the mode estimated after the MCMC has finished running, or on the initial mode found by optimization?

Neither. Bayesian objects are moments across parameter draws, e.g. the posterior mean is the mean across draws.
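A minimal Python sketch of that idea, assuming a hypothetical scalar parameter and a made-up "IRF" function of it (this is not Dynare's actual code path):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for an MCMC sample: draws of one parameter from its posterior.
draws = rng.normal(loc=0.7, scale=0.1, size=50_000)

# A model object that depends on the parameter, e.g. an IRF at one horizon.
def irf_at_horizon_one(rho):
    return rho ** 2  # hypothetical impulse response, nonlinear in rho

# Bayesian reporting: evaluate the object draw by draw, then average.
posterior_mean_irf = np.mean(irf_at_horizon_one(draws))

# This is NOT the same as the IRF at the posterior mean of the parameter:
irf_at_mean_param = irf_at_horizon_one(np.mean(draws))

print(posterior_mean_irf, irf_at_mean_param)
```

Because the object is nonlinear in the parameter, the mean across draws differs from the object evaluated at the parameter mean (or mode), which is exactly why the reported figures are computed across draws rather than at any single point estimate.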