Dear All,
I would like to ask whether Dynare is able to recognize indeterminacy regions. That is, when computing posteriors in a Bayesian estimation, can it discard all parameter draws that fall in an indeterminacy region? If yes, how does this work? Is there an algorithm or a penalty value associated with such parameter draws? Or, if not, are all draws automatically in the determinacy region provided one sets good priors?
Thanks so much!
Best,
MatlabNerd
All draws that do not yield a unique, determinate, finite, real-valued solution are discarded during Bayesian estimation; see An/Schorfheide (2007) for more on this. This is achieved by using a flag that indicates a bad draw (essentially -Inf for the log-likelihood). The penalty function approach is only used for mode-finding, where not having a cliff like -Inf is preferable.
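The logic of that flag can be illustrated with a minimal Metropolis-Hastings sketch in Python (this is not Dynare's actual code; the parameter name psi, the boundary psi <= 1, and the toy kernel are purely illustrative). A draw flagged with -Inf has acceptance probability zero and is thereby discarded automatically:

```python
import math
import random

random.seed(0)

def log_posterior(psi):
    """Toy log posterior kernel. Draws outside the determinacy region
    (illustratively psi <= 1, a stand-in boundary) are flagged with
    -inf, so they can never be accepted."""
    if psi <= 1.0:
        return -math.inf                         # flag for a "bad" draw
    return -0.5 * ((psi - 1.5) / 0.2) ** 2       # toy Gaussian kernel

def metropolis_hastings(n_draws, start=1.5, step=0.3):
    draws, current = [], start
    lp_current = log_posterior(start)
    for _ in range(n_draws):
        proposal = current + random.gauss(0.0, step)
        lp_proposal = log_posterior(proposal)
        # exp(-inf) = 0: an indeterminate proposal has acceptance
        # probability zero and is discarded automatically
        if random.random() < math.exp(min(0.0, lp_proposal - lp_current)):
            current, lp_current = proposal, lp_proposal
        draws.append(current)
    return draws

draws = metropolis_hastings(5000)
print(min(draws) > 1.0)  # True: no retained draw is indeterminate
```

Note that for gradient-based or simplex mode-finders, such an infinite cliff is exactly what makes a smooth penalty preferable.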
Discarding draws of course leads to problems when computing the marginal data density, as the effective prior does not integrate to 1. Thus, a good prior should ideally ensure that all draws from it result in a unique, determinate, finite, real-valued solution. However, as long as you are not doing model comparison, this should not be problematic. Again, see An/Schorfheide (2007) and the subsequent discussion in that journal issue.
Dear jpfeifer,
Could you please provide some literature that applies or discusses the penalty function approach when searching for the posterior mode?
Regarding your argument "Discarding draws of course leads to problems when computing the marginal data density, as the effective prior does not integrate to 1", I have a question. If we know the analytical form of the boundary between the determinacy and indeterminacy regions, one could renormalize the unconditional prior so that the effective prior for the determinacy region still integrates to 1. Is this right? I think Lubik and Schorfheide (2004, AER) did this. They specify a Gamma prior for the response coefficient of inflation in the Taylor rule, which encompasses both the determinacy and indeterminacy regions. Specifically, in the estimation conditioning on determinacy, when a draw is from the indeterminacy region, it is discarded. When a draw is from the determinacy region, its prior is just the unconditional prior divided by the probability of the determinacy region. In this way, I think the effective prior integrates to 1, right? If so, "a good prior should ideally ensure that all draws from it result in a unique, determinate, finite, real-valued solution" may not be necessary. You also mentioned "as long as you are not doing model comparison, this should not be problematic". I don't understand what the problem with doing model comparison is in these circumstances. Lubik and Schorfheide (2004, AER) did model comparison in this framework.
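The renormalization described above can be checked numerically. The following Python sketch uses the Gamma prior with mean 1.1 and standard deviation 0.5 mentioned later in this thread and, purely for illustration, takes psi > 1 as the determinacy region; dividing the unconditional density by the probability mass of that region yields a truncated prior that integrates to 1:

```python
import math

# Gamma prior with mean 1.1 and std 0.5; shape/scale follow from
# mean = k*theta and var = k*theta^2.
mean, std = 1.1, 0.5
k = (mean / std) ** 2          # shape
theta = std ** 2 / mean        # scale

def gamma_pdf(x):
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def integrate(f, a, b, n=20000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Probability mass of the (illustrative) determinacy region psi > 1;
# the upper bound 20 makes the truncated tail numerically negligible.
p_det = integrate(gamma_pdf, 1.0, 20.0)

# Effective (truncated) prior: unconditional prior / P(determinacy)
def truncated_pdf(x):
    return gamma_pdf(x) / p_det if x > 1.0 else 0.0

total = integrate(truncated_pdf, 1.0, 20.0)
print(round(total, 6))  # 1.0: the renormalized prior integrates to 1
```

The constant p_det is exactly the renormalization factor discussed above; whether this resolves the model-comparison issue is the substance of the question.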
By the way, is there a plan for Dynare to handle indeterminacy in estimation as in Lubik and Schorfheide (2004, AER)?
Thank you!
Bing
Dear jpfeifer,
Thanks for your clarifications!
Regarding your responses, I would appreciate it very much if you could provide further clarifications on the following question:

Out of curiosity, does Dynare use the penalty method in the mode-finding stage? If so, which mode-finding options use it? mode_compute = 4, 6, 9?

Suppose I estimate the model of Lubik and Schorfheide (2004) under determinacy in Dynare, with the priors set in the same way as theirs. The unconditional prior for the response coefficient of inflation in the Taylor rule is Gamma(1.1, 0.5), which means that this prior encompasses both the determinacy and indeterminacy regions. Is this a "good prior" in the sense of Dynare? I am asking because this prior does not ensure that all draws from it result in a unique, determinate, finite, real-valued solution. If it is not a "good prior", how should we specify the prior for this parameter in Dynare?

Dynare always reports the log marginal data density after estimation. In the above case, where draws implying indeterminacy are discarded and there is no renormalization, is the log marginal data density reported by Dynare correctly computed?

According to your argument on the renormalization of the prior, I think it matters for the computation of the marginal data density. But for the MCMC, whether or not one renormalizes does not matter. Under renormalization, the log prior is just the log of the unconditional prior minus a constant (the log probability of the determinacy region). Because the MCMC always evaluates the difference of the log posterior kernel between the current draw and a candidate draw, this constant cancels out. So what matters is only the unconditional prior. Am I right?
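The cancellation claimed above is easy to verify numerically. In this Python sketch the kernels, the parameter values, and the determinacy probability of 0.6 are all hypothetical; the point is only that a constant shift of the log prior drops out of the Metropolis-Hastings ratio:

```python
import math

def log_kernel(psi, log_norm=0.0):
    """Toy log posterior kernel; log_norm is an optional constant
    renormalization term subtracted from the log prior."""
    log_prior = -0.5 * (psi - 1.1) ** 2 - log_norm   # toy log prior
    log_lik = -0.5 * (psi - 1.4) ** 2                # toy log likelihood
    return log_prior + log_lik

log_p_det = math.log(0.6)   # hypothetical P(determinacy region)

current, proposal = 1.2, 1.5
diff_raw = log_kernel(proposal) - log_kernel(current)
diff_renorm = log_kernel(proposal, log_p_det) - log_kernel(current, log_p_det)

print(abs(diff_raw - diff_renorm) < 1e-12)  # True: the constant cancels
```

So the acceptance probabilities, and hence the MCMC draws, are identical with or without renormalization; only quantities that use the level of the prior, such as the marginal data density, are affected.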
Thank you!
Dear jpfeifer,
Could you please provide some clarifications for me? That would be very helpful!
Thanks a lot!
Thank you very much for the clarifications, dear jpfeifer!