Some questions about identification

Dear all,

I have a few questions on parameter identification. I would appreciate it very much if anyone could provide me with some hints or suggestions!

  1. In the attached file “identification.rar”, I include a mod file for identification analysis of my model. When I run the code, the analysis at the prior mean reports that two parameters are PAIRWISE collinear (with tol = 1.e-10), so the ranks of H (model) and J (moments) are deficient. However, when I do Monte Carlo testing, the identification problem disappears and all the parameters are identified. What is going on here? Should I worry about the identification problem or can I ignore it?
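For reference, the two checks correspond to something like the following in the mod file (a minimal sketch; the number of Monte Carlo draws is just an illustration):

  // analysis at the prior mean only (Dynare's default, prior_mc=1)
  identification;

  // Monte Carlo analysis over the prior space, e.g. 250 draws
  identification(prior_mc=250);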

  2. The second question is about better understanding parameter identification. In the attached file “identification2.rar”, I include two figures. I have run the identification analysis and all the parameters are identified. However, in figure1.eps, which comes from mode_check, the line for gamma_Q is completely blue. What does that mean? Does it mean the likelihood is completely flat, so that this parameter is not identified from an MLE perspective? Since the prior is informative, the parameter would still be identified from a Bayesian perspective, right? In Dynare’s identification package, is lack of identification assessed from a Bayesian or an MLE perspective?

For other parameters, like tau, the blue and green lines seem to overlap, which would mean the likelihood and posterior kernels are the same. But the prior is informative, and that is not reflected in the figure. Why does this happen?

In the upper panel of identification.eps, gamma_Q has a huge red bar pointing in the negative direction. I guess there is some link between this figure and the previous one. My question is: how should I understand the identification strength figure?
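For context, the two figures come from commands along these lines (a sketch; the datafile name is a placeholder):

  // mode_check plots the likelihood and posterior kernel around the mode
  estimation(datafile=mydata, mode_check);

  // produces the identification strength figure
  identification;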

Thanks a lot!
identification2.rar (9.61 KB)
identification.rar (2.66 KB)

Dear jpfeifer,

Could you please provide some clarifications for me? That would be very helpful!

Thanks a lot!

I need more time to look into this.

OK. Please take your time. I really appreciate your help!

  1. You are neglecting the different scales in the mode_check plot. The likelihood line is the green line at the top of the figure that merges with the boundary of the subplot. It is the massively informative prior that gives the blue line its shape.

Regarding the interpretation of the identification strength graph, please see Pfeifer (2014): “An Introduction to Graphs in Dynare” at sites.google.com/site/pfeiferecon/dynare. The identification strength is generally evaluated for the posterior with Bayesian estimation and for the likelihood with ML. Thus, the prior plays a role.

Hi there:

  1. In this case, there is clearly an issue of weak identification, at least in a portion of the prior space. The tolerance level for checking the rank is set at 1.e-10, and the model can sit just above or below that threshold (see the rank-check sketch after this list). In the MC checks, the rank test is passed, but one interesting plot is

_MC_HighestCondNumberMoments_SA_1.eps

There we map the highest condition numbers, and the black cumulative distribution for rho_z shows that the smaller the persistence of zShock, the higher the probability of weak or no identification. For example, if you change the prior mean of rho_z to 0.85, the collinearity at tolerance 1.e-10 disappears.
Another interesting plot from the MC tests shows the identification strength computed for the parameter combination yielding the highest condition number: in the plot

_ident_strength_Draw_with_HIGHEST_condition_number.eps

you can see that the Taylor rule parameters are still subject to weak or lacking identification: eps_R, rho_R, psi_R_pi, psi_R_y. Again, I suspect that if you impose a larger prior mean for rho_z, this borderline behavior may disappear.

  2. Some more comments about the tests: the rank/information-matrix tests are always done on an ML basis, whether the context is Bayesian or ML. However, for the Bayesian case the identification strength plots also show red bars, where the strength is normalized by the prior standard deviation, thus combining ML with prior information. For gamma_Q, the prior standard deviation is very small, which produces the big negative spike in the (log) identification strength. This further confirms that the prior information largely dominates the likelihood for this parameter.
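To make the rank test and the condition number mentioned above concrete, here is a minimal MATLAB sketch; J stands in for the Jacobian of the moments with respect to the parameters and is not the actual Dynare variable name:

  s = svd(J);                  % singular values, sorted largest first
  cond_number = s(1)/s(end);   % very large => close to rank deficiency
  rank_J = sum(s > 1.e-10);    % rank at the 1.e-10 tolerance
  % if rank_J is smaller than the number of parameters, some of them
  % are collinear at this point of the parameter space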

Dear jpfeifer and rattoma,

Thank you both for your comments. They are very helpful for improving my understanding of the identification analysis.

rattoma, could you please recommend some references related to what you said about identification, especially concerning the Monte Carlo option and the condition number? Also, when I change the prior mean of rho_z to 0.85, the collinearity at tolerance 1.e-10 disappears, as you said. But for the Monte Carlo testing, I get the following error, which does not appear when the prior mean of rho_z is 0.5:

Undefined function or variable 'new_index'.
Error in gamrnd>best_1978_algorithm (line 401)
index = union(new_index,INDEX(Jndex));
Error in gamrnd (line 128)
rnd(double_idx(big_idx)) = best_1978_algorithm(a(double_idx(big_idx)),b(double_idx(big_idx)));
Error in betarnd (line 53)
rnd = x./(x+gamrnd(b, ones(mb,1)));
Error in prior_draw (line 116)
pdraw(beta_index) = (p4(beta_index)-p3(beta_index)).*betarnd(p6(beta_index),p7(beta_index))+p3(beta_index);
Error in dynare_identification (line 365)
params = prior_draw();
Error in identification (line 246)
dynare_identification(options_ident);
Error in dynare (line 185)
evalin('base',fname) ;

Is this a bug in Dynare?

Thanks a lot!

Hi,
I think you just need to set a smaller prior standard deviation, e.g. 0.1, since 0.2 is too big for a beta prior with mean 0.85.
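To see why, one can back out the beta shape parameters implied by a given mean and standard deviation (a quick MATLAB check, not Dynare code):

  m = 0.85; s = 0.2;
  nu = m*(1-m)/s^2 - 1;    % alpha + beta
  a = m*nu;                % approx. 1.86
  b = (1-m)*nu;            % approx. 0.33: below 1, the density is
                           % unbounded at 1, an extreme prior shape
  % with s = 0.1 instead, b is approx. 1.76, which is well-behaved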
Concerning references, you could have a look at:
bookshop.europa.eu/en/european-community-project-monfispol-deliverable-3.1.2-pbLBNA25032/
cheers
Marco

Thanks, Marco! Setting a smaller prior standard deviation works. I had not noticed the unreasonably high standard deviation for rho_z given its mean of 0.85. Thank you also for providing the reference.

This thread has been very useful. I have a few questions:

  1. I was having similar problems with a model I am working on. I played around with the priors, and now the parameters are identified at the prior mean and also according to the MC testing. However, once I estimate the model and then run the identification test immediately afterwards, the problems with the MC test reappear: some parameters are not identified anymore. Why is this happening?

Is it because the MC test is now being done on the posterior distribution? The parameters also suffer from lack of identification at the posterior mean.

Marco explained very clearly how to correct this problem in the prior distribution case. Is there any way of correcting it in the posterior case?

  2. When parameters are identified according to the MC test but not at the prior mean, what should one do? Can one proceed with estimation? What is the cost of ignoring poor identification at such a point in the parameter space?

  3. This question is about the ‘MC mean of sensitivity measures’ graph. If the blue bars (representing moments) are very small for most parameters in model A relative to model B, given that A is nested in B, does this say anything about the goodness of model B over model A? Parameters in both models are identified.

In other words, is there any way of telling which of the two models is better identified?