I am estimating a model in which the innovations of the shocks are correlated, and the correlation coefficient is one of the estimated parameters. When I run the identification command, Dynare crashes with the error message
Error using .*
Matrix dimensions must agree.
Error in identification_analysis (line 224)
deltaM = deltaM.*abs(params');
As far as I can see, this is because the correlation coefficient is not included when the matrix JJ is constructed by the function getJJ on line 82 of identification_analysis.m, so deltaM has one row fewer than params.
I know that I can introduce correlated innovations via eps1 = e1 + phi*e3 and eps2 = e2 + phi*e3, and the identification command then works fine, but in that case it reports the identification measure for phi rather than directly for the correlation coefficient. So I was wondering whether there is a way to run identification with eps1 and eps2 directly, without introducing e1, e2, e3, and phi.
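For intuition, the reparameterization above can be checked numerically outside Dynare; a minimal sketch in Python (NumPy assumed, the loading phi = 0.5 is illustrative):

```python
import numpy as np

# Correlated innovations built from three independent unit-variance shocks.
rng = np.random.default_rng(0)
phi = 0.5          # illustrative loading on the common shock e3
n = 200_000
e1, e2, e3 = rng.standard_normal((3, n))

eps1 = e1 + phi * e3
eps2 = e2 + phi * e3

# Implied moments: cov(eps1, eps2) = phi^2 and var = 1 + phi^2 for each,
# so corr(eps1, eps2) = phi^2 / (1 + phi^2).
implied = phi**2 / (1 + phi**2)
sample = np.corrcoef(eps1, eps2)[0, 1]
print(implied, sample)  # agree up to sampling noise
```

This also shows why identification is reported for phi: the correlation coefficient only enters through the nonlinear mapping phi^2/(1 + phi^2).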
correl_shocks_est_dynare_corr_command_iden.mod (833 Bytes)
Indeed, identification does not work with the correlation coefficient estimated. This is on the to-do list for the future, but so far the only thing that works is to reparameterize the shocks within the model definition.
I think you could also use a Cholesky decomposition to get it? That might also allow mapping the Cholesky factors back to the original correlation coefficient.
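For a 2x2 covariance matrix the Cholesky factor has a closed form, so the mapping back to the correlation coefficient is direct. A sketch of the idea (NumPy assumed, parameter values illustrative):

```python
import numpy as np

sigma1, sigma2, rho = 0.9, 1.1, -0.2   # illustrative values

# Covariance matrix of the two correlated innovations.
Sigma = np.array([[sigma1**2,             rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2, sigma2**2]])

# Lower-triangular Cholesky factor: Sigma = L @ L.T.
L = np.linalg.cholesky(Sigma)

# For the 2x2 case the factor is
#   [[sigma1,       0                       ],
#    [rho*sigma2,   sigma2*sqrt(1 - rho**2) ]],
# so rho can be read back off the second row:
rho_back = L[1, 0] / np.hypot(L[1, 0], L[1, 1])
print(rho_back)
```

Estimating the entries of L instead of rho keeps the covariance matrix positive definite by construction, and the correlation coefficient can then be recovered as above.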
OK, thanks! I will try to think about a way to do it using Cholesky decomposition.
OK, so it seems that it can be done using a model-local variable.
If the innovations are e1 = sigma1*eps1 - phi1*eps3 and e2 = sigma2*eps2 - phi1*eps3 with correlation coefficient rho = -phi1^2 / (sigma1*sigma2 + phi1^2), then the identification command (at least for the posterior mean) works if I write
#phi1 = sqrt( -sigma1*sigma2*rho/(1+rho) );
v1 = rho11*v1(-1) + rho12*v2(-1) + sigma1*eps1 - phi1*eps3;
v2 = rho12*v1(-1) + rho11*v2(-1) + sigma2*eps2 - phi1*eps3;
rho, -0.2, beta_pdf, 0, 0.3, -1, 1;
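As a sanity check (purely algebraic, not a Dynare run), the #phi1 expression is the exact inverse of the stated mapping rho = -phi1^2/(sigma1*sigma2 + phi1^2); note it requires rho in (-1, 0) for phi1 to be real. A quick verification in Python with illustrative parameter values:

```python
import math

sigma1, sigma2, rho = 0.9, 1.1, -0.2   # illustrative; rho must lie in (-1, 0)

# Model-local variable from the mod file:
phi1 = math.sqrt(-sigma1 * sigma2 * rho / (1 + rho))

# Invert the stated mapping back to rho.
rho_back = -phi1**2 / (sigma1 * sigma2 + phi1**2)
print(phi1, rho_back)  # rho_back reproduces rho exactly
```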
Does this look OK to you, rattoma?
correl_shocks_estim.mod (1.9 KB)