Indeterminacy and draws

Dear All,

I would like to ask whether Dynare is able to recognize indeterminacy regions. That is, in a Bayesian estimation, when computing posteriors, does it discard all parameter draws that fall into an indeterminacy region? If yes, how is this done? Is there an algorithm or a penalty value associated with such parameter draws? Or, if not, do all draws automatically fall into the determinacy region provided one sets good priors?
Thanks so much!

Best,

MatlabNerd

All draws that do not yield a unique, determinate, finite, real-valued solution are discarded during Bayesian estimation. See An/Schorfheide (2007) for more on this. This is achieved by using a flag that indicates a bad draw (essentially -Inf for the likelihood). The penalty function approach is only used for mode-finding as not having a cliff like -Inf is preferable.
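The interaction between such a flag and the sampler can be sketched in a few lines (a toy Python illustration, not Dynare's actual code: the "indeterminacy region" theta <= 1 and the Gaussian posterior kernel are made up for the example). A proposal flagged with -Inf gets acceptance probability exp(-Inf) = 0, so it is never accepted:

```python
import math
import random

random.seed(0)

def log_posterior(theta):
    # Stand-in for a DSGE log posterior: draws in the (hypothetical)
    # indeterminacy region theta <= 1 are flagged with -inf, mimicking
    # how draws without a unique, determinate solution are treated.
    if theta <= 1.0:
        return -math.inf
    return -0.5 * (theta - 1.5) ** 2  # toy Gaussian kernel

def metropolis(n_draws, theta0=1.5, step=0.3):
    draws, theta, lp = [], theta0, log_posterior(theta0)
    for _ in range(n_draws):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop)
        # exp(-inf) = 0, so a flagged proposal is rejected with certainty.
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws

chain = metropolis(5000)
assert all(d > 1.0 for d in chain)  # no flagged draw survives in the chain
```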

Discarding draws of course leads to problems with computing the marginal data density, as the effective prior does not integrate to 1. Thus, a good prior should ideally ensure that all draws from it result in a unique, determinate, finite, real-valued solution. However, as long as you are not doing model comparison, this should not be problematic. Again, see An/Schorfheide (2007) and the subsequent discussion in that journal issue.

Dear jpfeifer,

Could you please provide some literature applying and mentioning the penalty function approach when searching for the posterior mode?

Regarding your argument “Discarding draws of course leads to problems with computing the marginal data density as the effective prior does not integrate to 1”, I have a question. If we know the analytical form of the boundary between the determinacy and indeterminacy regions, one could renormalize the unconditional prior so that the effective prior over the determinacy region still integrates to 1. Is this right? I think Lubik and Schorfheide (2004, AER) did this. They specify a Gamma-distributed prior for the response coefficient of inflation in the Taylor rule, which encompasses both the determinacy and indeterminacy regions. Specifically, in the estimation conditioning on determinacy, a draw from the indeterminacy region is discarded, while a draw from the determinacy region gets as its prior the unconditional prior divided by the probability of the determinacy region. In this way, I think the effective prior integrates to 1, right? If so, “a good prior should ideally assure that all draws from it result in a unique, determinate, finite, real-valued solution” may not be necessary. You also mentioned “as long as your are not doing model-comparison, this should not be problematic”. I don’t understand what the problem with model comparison is in this circumstance. Lubik and Schorfheide (2004, AER) did model comparison in this framework.
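The renormalization described above can be illustrated numerically (a toy Python sketch; the Gamma prior moments follow the Lubik-Schorfheide example, but the determinacy condition phi_pi > 1 is a simplification used only for illustration):

```python
import random

random.seed(1)

# Gamma prior with mean 1.1 and std 0.5, parameterised by its moments.
mean, std = 1.1, 0.5
shape = (mean / std) ** 2   # ~ 4.84
scale = std ** 2 / mean     # ~ 0.227

n = 100_000
draws = [random.gammavariate(shape, scale) for _ in range(n)]

# Simplified determinacy condition: Taylor principle phi_pi > 1.
n_det = sum(d > 1.0 for d in draws)
p_det = n_det / n  # Monte Carlo estimate of the prior mass on determinacy

# The effective (truncated) prior divides the unconditional density by
# p_det on the determinacy region; its total mass is then 1 again:
effective_mass = sum((1.0 / p_det) * (1.0 / n) for d in draws if d > 1.0)
assert abs(effective_mass - 1.0) < 1e-9
```

Without the division by `p_det`, the mass of the effective prior would be `p_det` &lt; 1, which is exactly what distorts the marginal data density.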

By the way, is there a plan for Dynare to handle the indeterminacy in estimation as in Lubik and Schorfheide (2004, AER)?

Thank you!

Bing

  1. I can’t. This is just a practical consideration of using an unconstrained Newton-type optimizer. As detailed in e.g. en.wikipedia.org/wiki/Penalty_method the constrained problem is effectively replaced by an unconstrained one through penalizing invalid parameter combinations. This seems to be a standard approach in the numerical optimization literature.

  2. Yes, you could renormalize to solve this issue, but Dynare does not (yet) do this. So you would still need a “good prior” in Dynare.

  3. If you don’t adjust the prior so that it integrates to one, the marginal data density computations will be wrong. As they are used for model comparison, the issue pops up here.

  4. Handling indeterminacy in estimation as in Lubik and Schorfheide (2004, AER) is planned but has not gone far yet, see github.com/DynareTeam/dynare/issues/111
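The penalty reformulation mentioned in point 1 can be sketched as follows (a toy Python example; the objective, the constraint theta > 1, and the penalty weight are made up, and this is not Dynare's implementation):

```python
def neg_log_likelihood(theta):
    # Toy objective whose unconstrained minimum lies at theta = 1.5,
    # inside the (hypothetical) valid region theta > 1.
    return 0.5 * (theta - 1.5) ** 2

def penalized(theta, weight=1e4):
    # Penalty-method reformulation: instead of returning +inf (a cliff
    # that Newton-type optimizers cannot handle), add a smooth term that
    # grows with the size of the constraint violation.
    violation = max(0.0, 1.0 - theta)
    return neg_log_likelihood(theta) + weight * violation ** 2

# Crude grid minimisation, just to show that the penalized objective
# steers the search back into the valid region.
grid = [i / 1000 for i in range(0, 3000)]
mode = min(grid, key=penalized)
assert mode > 1.0  # the located mode respects the constraint
```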

Dear jpfeifer,

Thanks for your clarifications!

Regarding your responses, I would appreciate it very much if you could provide further clarifications on the following question:

  1. Out of curiosity, does Dynare use the penalty method in the mode-finding stage? If so, which mode-finding options use it? mode_compute = 4, 6, 9?

  2. Suppose I estimate the model of Lubik and Schorfheide (2004) under determinacy in Dynare, with the priors set in the same way as theirs. The unconditional prior for the response coefficient of inflation in the Taylor rule is Gamma(1.1, 0.5), which means that this prior encompasses both the determinacy and indeterminacy regions. Is this a “good prior” in the sense of Dynare? I am asking because this prior does not ensure that all draws from it result in a unique, determinate, finite, real-valued solution. If it is not a “good prior”, how should we specify the prior for this parameter in Dynare?

  3. Dynare always reports log marginal data density after the estimation. In the above case where draws for indeterminacy are discarded and there is no renormalization, is the log marginal data density reported by Dynare a correctly computed number?

  4. According to your argument on the renormalization of the prior, I think renormalization matters for the computation of the marginal data density. But for the MCMC itself, it does not matter whether we renormalize. Under renormalization, the log prior is just the log of the unconditional prior minus a constant (the log probability of the determinacy region). Because the MCMC always uses the difference in the posterior kernel between the current draw and a candidate draw, this constant cancels out. So what matters is only the unconditional prior. Am I right?

Thank you!

Dear jpfeifer,

Could you please provide some clarifications for me? That would be very helpful!

Thanks a lot!

  1. For all mode finders. It is implemented in the likelihood computation, not the mode-finder.

  2. If you want to do model comparison directly in Dynare, this is not a good prior. You should only assign prior mass to determinate solutions.

  3. No, it will not be correct. The more draws are discarded the bigger the potential error becomes.

  4. Yes.
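The cancellation confirmed in point 4 is easy to verify in a toy example (a Python sketch; the standard-normal prior kernel and the determinacy probability of 0.6 are hypothetical):

```python
import math

def log_prior(theta):
    # Toy unconditional log prior (standard normal kernel).
    return -0.5 * theta ** 2

LOG_P_DET = math.log(0.6)  # hypothetical log P(determinacy)

def log_accept_ratio(theta, prop, renormalize):
    # Log acceptance ratio of a symmetric-proposal Metropolis step
    # (likelihood terms omitted, as they are unaffected either way).
    lp_new, lp_old = log_prior(prop), log_prior(theta)
    if renormalize:
        lp_new -= LOG_P_DET
        lp_old -= LOG_P_DET
    return lp_new - lp_old

theta, prop = 0.3, 0.8
# The constant -log P(determinacy) cancels in the difference, so the
# acceptance decision is identical with or without renormalization.
assert math.isclose(log_accept_ratio(theta, prop, False),
                    log_accept_ratio(theta, prop, True))
```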

Thank you very much for the clarifications, Dear jpfeifer!