Could you please advise on reporting the results? After the Bayesian estimation is complete:
- Should one report the mean or the mode of the parameter estimates?
- For calibrating the model with the parameter estimates, is it better to use the mean or the mode?
- Similarly, which statistic should be used when reporting filtered variables?
Could I ask another question, related to mode-finding with the optimizers?
In certain instances, when I try to find the mode with an optimizer (say, mode_compute=1), the resulting Hessian is not positive definite, but the plots generated by mode_check look fine. When I then use the parameters found by mode_compute=1 as starting values and restart the mode search with mode_compute=6, I get a positive definite matrix, with only minor decimal changes to the initial parameters. Would you happen to have a guess as to why this is?
Without seeing the specific case it is impossible to tell. But the way mode_compute=6 computes the covariance matrix is very different from the other optimizers: they try to compute the inverse Hessian at the mode, while mode_compute=6 directly uses the covariance of the draws from an MCMC.
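A common workflow matching what you describe is to chain optimizers, feeding the mode found by one run into the next via the mode_file option. A minimal sketch, assuming a standard estimation setup (the data file and model name are placeholders):

```
// First pass: mode_compute=1 (fmincon-type optimizer); Dynare saves the
// mode and inverse Hessian in MODELNAME_mode.mat
estimation(datafile=mydata, mode_compute=1, mh_replic=0);

// Second pass: restart from the stored mode with mode_compute=6, whose
// covariance matrix comes from the covariance of MCMC-type draws
estimation(datafile=mydata, mode_file=mymodel_mode, mode_compute=6, mh_replic=0);
```

With mh_replic=0 both calls stop after mode-finding, so the two optimizers (and their covariance matrices) can be compared before launching the full MH run.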
Could I ask another question? I just finished running a Bayesian estimation: one chain with 50 million draws, an acceptance rate of around 26% that was stable from midway through, and a Geweke test that cannot reject the null of mean equality for any parameter.
Is it correct to claim in this case that the mode has been successfully estimated (regardless of initial conditions)?
Where are the results for the estimated mode stored (in filename_mh_mode, or somewhere else)? And is there an equivalent of the 90% confidence band for the estimated mode, similar to what is reported for the mean (and if so, where is it stored)?
Thank you, and apologies if these types of questions have already been answered.
Could I ask a few questions? First, why is the sign of “fvalue” in the mode file always opposite to that of mh_mode (or is it just my estimation)? Second, with uniform priors, is it correct to say that a variable is “insignificant” if its estimated coefficient is pushed to zero?
Thanks, Johannes. Could I ask another question? Say a parameter on the (0,1) interval is theoretically identified, but the identification is weak. A Beta prior results in a posterior that fully coincides with the prior. Is it correct to go for a uniform prior in this case (the posterior then peaks at the border value, though)? Thanks again.
If the identification is so weak that the prior and the posterior coincide, I would always go for an informative prior.
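For concreteness, the two prior choices being compared can be written in the estimated_params block roughly as follows (the parameter name and hyperparameters are purely illustrative):

```
estimated_params;
// Informative Beta prior on (0,1): prior mean 0.5, prior std 0.15
theta, beta_pdf, 0.5, 0.15;

// Alternative: flat prior on (0,1) -- with weak identification the
// posterior can pile up at a boundary of the support
// theta, uniform_pdf, , , 0, 1;
end;
```

With identification this weak, tightening the Beta prior (or fixing the parameter outright) is usually more defensible than switching to the flat prior.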
Could I clarify another point with Dynare? After Bayesian estimation is complete, a set of figures is produced (IRFs, smoothed series, etc.). Are these figures based on the mode estimated after the MCMC has finished running, or on the initial mode found by optimization?
Neither. Bayesian objects are moments taken across the parameter draws; e.g., the posterior mean of an object is its mean across the retained MH draws.
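In Dynare terms, these posterior objects are requested through options of the estimation command; a minimal sketch (data file, option values, and variable names are illustrative only):

```
// The IRF and smoothed-variable figures produced after the MCMC are
// posterior moments (pointwise means with HPD intervals) computed
// across the retained MH draws, not objects evaluated at a single mode
estimation(datafile=mydata, mh_replic=100000, mh_nblocks=2,
           mh_jscale=0.3, bayesian_irf, smoother) y c pie r;
```

Here bayesian_irf and smoother trigger the posterior IRF and smoothed-series figures, each shown with its highest-posterior-density band rather than a single-parameter-vector line.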