If you have a model with a large indeterminacy region, how does that affect the estimation of that model? Could one argue that the resulting posteriors are biased? For instance, if you are already forced to choose certain priors in order to stay out of the indeterminacy region, can one really trust any posterior estimates from that model?

The indeterminacy region amounts to implicit prior truncation. It is usually not an issue for the posterior. Exceptions are cases where this region creates a valley that the MCMC sampler is unable to cross to reach the other side. But usually that region extends to infinity, so this is rarely a problem.
Where prior truncation does create problems is in model comparison via marginal data densities. There you need to correct for the prior not integrating to 1.
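A minimal sketch of that correction. Here `is_determinate` and the raw log marginal data density value are hypothetical stand-ins for the actual determinacy check (e.g. the Blanchard-Kahn conditions) and for the number an estimation run would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical determinacy indicator: in a real application this would
# check the Blanchard-Kahn conditions for a given parameter draw.
def is_determinate(theta):
    # toy rule: determinacy requires the parameter to exceed 1
    return theta > 1.0

# Prior: theta ~ N(1.5, 0.5^2), implicitly truncated to the determinacy region.
# Estimate by Monte Carlo how much prior mass survives the truncation.
prior_draws = rng.normal(1.5, 0.5, size=100_000)
prior_mass = is_determinate(prior_draws).mean()

# The effective (truncated) prior is p(theta) * 1[determinate] / prior_mass,
# so a log marginal data density computed with the untruncated prior density
# must be corrected by subtracting log(prior_mass).
log_mdd_raw = -123.4  # placeholder value from a hypothetical estimation run
log_mdd_corrected = log_mdd_raw - np.log(prior_mass)
```

Since `prior_mass < 1`, the correction always pushes the log marginal data density up, and models whose priors lose more mass to the indeterminacy region are penalized less than the raw numbers suggest.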

Thanks for the reply, Johannes. What if the model has a unit root for a large fraction of the parameter space, and even simulating the model at the prior mean and at the posterior mean gives an explosive solution? In that case, one can't really trust the estimation of the model, can one?

Essentially, how far can one trust these estimates if you have to guess different values and combinations across the range of the posterior parameters just to get a workable solution?

Any draw from the posterior must be workable. But you have a joint posterior, so you cannot draw individual parameters independently. You need to take the correlation in the posterior into account.

Yes, but I meant a simulation after the estimation and independent of it. The estimation provides a set of posterior distributions for the parameters. You set all parameters equal to their posterior means. However, that parameter set is not workable.

That is exactly what I was saying. You cannot simply take the mean of the individual parameters, because you would be neglecting the parameter correlations in the posterior.
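A stylized illustration of this point, with a made-up "workable" region and a deliberately correlated posterior: every individual draw is admissible, yet the vector of parameter-by-parameter means is not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical admissibility check: a parameter vector is "workable"
# only if it lies outside the unit circle.
def is_workable(theta):
    return np.linalg.norm(theta) > 1.0

# A stylized joint posterior: strongly correlated draws on a ring of
# radius 1.1, so each draw individually satisfies the check.
angles = rng.uniform(0.0, 2.0 * np.pi, size=10_000)
draws = 1.1 * np.column_stack([np.cos(angles), np.sin(angles)])

all_draws_workable = all(is_workable(d) for d in draws)  # every draw passes
mean_workable = is_workable(draws.mean(axis=0))          # the mean vector does not
```

The mean of the draws sits near the origin, inside the inadmissible region, even though no single draw is ever there. This is the sense in which "simulate at the posterior mean" can fail while the posterior itself is perfectly fine.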

OK, I see. I have read multiple papers (in good journals) where they simulate the model at the mean of the posterior parameters. So what they are doing is technically incorrect? Also, when simulating a simple model with calibrated parameters, those calibrated values implicitly reference previous research that estimated them. Thus, isn't calibration simply setting the model to the mean of some parameter that an earlier paper has estimated?

Would you say that the only way to properly simulate an estimated model is to simulate it over the entire posterior distribution of the parameters? I.e., plot an IRF for each accepted parameter draw from the estimation, so that the IRFs are described by a region.

No. What I am saying is that the approach you read in those papers is not guaranteed to work. People find it convenient to work with the mean of each parameter, neglecting the correlation between parameters within draws. Your paper is one where this difference seems to play an important role.
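The draw-by-draw IRF approach asked about above can be sketched as follows, using a hypothetical AR(1) posterior for the persistence parameter purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized posterior draws for an AR(1) persistence parameter
# (hypothetical distribution, concentrated around 0.8).
rho_draws = rng.beta(20, 5, size=5_000)

# IRF of y_t = rho * y_{t-1} + e_t to a unit shock, computed per draw:
# the response at horizon h is rho**h.
horizons = np.arange(21)
irfs = rho_draws[:, None] ** horizons[None, :]  # shape: (draws, horizons)

# Pointwise posterior bands instead of a single IRF at the mean parameter.
irf_median = np.median(irfs, axis=0)
irf_lo, irf_hi = np.percentile(irfs, [5, 95], axis=0)
```

Each accepted draw produces one IRF, and the collection is summarized by pointwise percentile bands, so parameter correlations are automatically respected within each draw.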