Bayesian estimation: mode computation

Hi Prof. Pfeifer and to all,

I have a question related to the Bayesian estimation of a DSGE model:

I estimate the posterior mode using different algorithms via the ''mode_compute'' option. I run them sequentially, with mode_compute=6 and then mode_compute=8 being the last two.

When I compare the log data density (Laplace approximation):

the estimation with mode_compute=6 has the higher log data density, 4161;
the estimation with mode_compute=8, which re-optimizes starting from the ''FILE_mode.mat'' produced by mode_compute=6, has a log data density of 4153, which is lower.

Everything else is kept the same.

Now, given equal prior odds, the rule is to choose the estimation with the higher log data density. But in my case the mode_compute=8 run starts from the mode already found by mode_compute=6, so I am tempted to choose the last one.
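To be concrete, the two runs look roughly like this (FILE and mydata are placeholders for my actual file names):

[code]
// first run: mode-finding only, no MCMC
estimation(datafile=mydata, mode_compute=6, mh_replic=0);
// second run: restart the optimization from the mode found above
estimation(datafile=mydata, mode_compute=8, mode_file=FILE_mode, mh_replic=0);
[/code]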

My question is:
in order to run the MH replications, should I stick with the previous estimation (the mode_compute=6 run), which has the higher log data density,
or
should I select the last one (mode_compute=8), which has already optimized over the previous one?

thanks

You are looking at the wrong statistic. The marginal data density is the likelihood of the data, given the model. In principle, it has nothing to do with the posterior mode (except that the Laplace approximation is taken around the mode), because the parameters are integrated out. You need to compare the posterior density, i.e. the density of the parameters given the model and the data:

[quote]Final value of minus the log posterior (or likelihood):-569.338740
[/quote]
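To put the distinction in symbols: the marginal data density is

p(Y|M) = \int p(Y|\theta,M) p(\theta|M) d\theta,

i.e. the parameters \theta are integrated out (the Laplace approximation only uses the mode and the curvature of the posterior at the mode to approximate this integral). Mode-finding, in contrast, maximizes the posterior kernel

p(\theta|Y,M) \propto p(Y|\theta,M) p(\theta|M)

over \theta, and it is the maximized value of this object (reported by Dynare as minus its log) that tells you which optimizer found the better mode.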

Thanks for the quick reply, Prof. Pfeifer.

I am a bit confused, though. I am citing below one of your comments, which can be found under the following link:

In the cited text above, someone asks you about model comparison using

and you approve. Is that right?
I am confused: how can I calculate the statistic you mention? Is it already provided by Dynare? I do not seem to find such a statistic in my mode computation output.

Again, you are not doing model comparison, but mode-finding. Your model is still the same. Model comparison is done via the marginal data density, as indicated in the post you reference. Mode-finding proceeds by finding the highest posterior density. After mode-finding, Dynare will tell you this value; in the unstable version it is displayed directly in the output window.

Thanks a lot Prof. for your patience.

I have done all computations in Dynare 4.4.3.
Is there any chance I can get the value somewhere with 4.4.3?
Or should I load the different modes (with mode_file=MODENAME.mat), run with mode_compute=0 in the unstable version, and read off the value of minus the log posterior (or likelihood)?
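That is, something like the following sketch (MODENAME and mydata stand for my actual file names; I am not sure whether this prints the value):

[code]
// load the previously estimated mode and skip mode-finding and MCMC
estimation(datafile=mydata, mode_file=MODENAME_mode, mode_compute=0, mh_replic=0);
[/code]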

In your log file, you should already have that line; it is printed in the main output window as well.

Thank you very much for your help Prof. Pfeifer.

Hi again Prof. Pfeifer and to all,

I am doing Bayesian estimation of an NK framework with some financial frictions.

I first did the mode computation and got the same results with mode_compute=6, 8, and 9.

But when I run the MCMC replications I get good diagnostics and a reasonable acceptance rate (22-23%), yet I have two problems:

  1. The posterior distributions look odd: the green vertical line (the mode) does not pass through the peak of the black line (the posterior distribution) for a couple of parameters; in some cases it is far away from it.
  2. In another case, the second set of replications (first set mh_replic=30000; second set mh_replic=30000) gives a much lower or higher acceptance rate than the first, although mh_jscale did not change.

I wonder what the underlying problem could be and how I could fix it (there is no identification problem: the identification test is fine).
I would appreciate it if someone who has run across the same problems could share their opinion.

Please do a trace plot of the parameters you think are problematic and post the results. See Pfeifer (2014): An Introduction to Graphs in Dynare at sites.google.com/site/pfeiferecon/dynare for the syntax.
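If I remember the syntax correctly, the call (issued from the MATLAB command window after the estimation has run) is roughly the following, where PARAMETER_NAME is a placeholder:

[code]
% trace plot of one estimated parameter from the first MCMC chain
trace_plot(options_, M_, estim_params_, 'DeepParameter', 1, 'PARAMETER_NAME')
[/code]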

Thanks a lot Prof. Pfeifer for agreeing to look further into my question.

I get a different acceptance ratio depending on the number of MCMC replications I run, so it took me a while to make sure I get an acceptance ratio of 24% on both blocks.
I am attaching the posterior distributions and the trace plots in the ZIP file below, as you asked, Professor.

PS: In addition, I got a warning during the run.

Very much appreciate that you agreed to look into it.

Best
diag.zip (945 KB)

You clearly need many more draws. At least 200,000.
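One way is to append draws to the chains you already have, roughly along these lines (datafile, mode file name, and mh_jscale are placeholders for your own values):

[code]
estimation(datafile=mydata, mode_file=FILE_mode, mode_compute=0,
           mh_replic=200000, mh_nblocks=2, mh_jscale=0.35, load_mh_file);
[/code]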

Thanks a lot for the quick reply Prof. Pfeifer,
In a separate estimation with twice as many observations I get a similar problem, though no warning as before (in the attached ZIP folder).

Is that the only reason?

(Obviously, that is something that can be done easily…)
diag2.zip (1 MB)

The warning you can ignore. It happens for one draw, which is nothing to worry about. But you really need more draws.

Many Thanks Professor.

Hi again to all and to Prof. Pfeifer,

Following up on the conversation with Prof. Pfeifer:
I have reached 300,000 replications in my Bayesian estimation.

The initial problem that I had seems to persist: while the diagnostics seem fine, the mode (green line) does not pass through the peak of the posterior distribution even after 300,000 replications.

I wonder how I could possibly fix that?
I have attached the posteriors in a zip file.

PS: the mode has been computed with mode_compute=8, 9, and 6, with the same results.
diag_300.zip (1.01 MB)

What exactly is your problem? Looking at the prior posterior plots and the trace plots, the estimation results look quite good.

Looking at the graphs I am citing below:

the green line does not pass through the peak of the posterior distribution.

Isn't it supposed to be that way, i.e. the vertical (green) line cutting the posterior (black) curve at its peak?
Alternatively, how should I interpret the fact that the posterior mode (green line) is not at the peak of the distribution?

Thanks a lot for your time, Prof. Pfeifer!

This can sometimes happen, but given the rather small differences this is not a reason to worry in your case (the distributions are made using a kernel density estimate and the peak can be somewhat off in that case). Usually when there is a serious problem, the green line is far away and the trace plots show serious drift. None of this is the case here.
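As a rough illustration of the kernel-density point (a plain MATLAB sketch, assuming the Statistics Toolbox is available; nothing Dynare-specific):

[code]
% The peak of a kernel density estimate need not coincide with the true mode,
% especially for skewed draws: the smoothing shifts the estimated peak.
rng(1);
draws   = 0.5 + exprnd(1, 20000, 1);   % skewed draws whose true mode is 0.5
[f, x]  = ksdensity(draws);            % Gaussian-kernel density estimate
[~, ix] = max(f);
fprintf('true mode: 0.50, KDE peak: %4.2f\n', x(ix));
[/code]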

Many thanks for the comments again Prof.

I think I might have a situation similar to what you are describing, that is, a drift in the trace plots.

I have simultaneously estimated the **same model** with a shorter sample of the same observables.
It seems there is a problem with the trace plots (e.g. for the parameters ''thetP'' and ''epsIS'').

Is there much I can do about it?
Should I raise the number of replications even further (currently 550,000)?
diag_SUBsample_550.zip (1.11 MB)

Not exactly. What you see in those two graphs is an example of bimodality, where the MCMC correctly explores both regions. You still might want to use a longer chain to properly sample from both modes. But there is nothing here to suggest that the chain has not yet converged to its ergodic distribution.
Have you looked at a trace_plot of the posterior density?
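That is, something along the lines of:

[code]
trace_plot(options_, M_, estim_params_, 'PosteriorDensity', 1)
[/code]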