Thank you. For (1), does yes mean that I should use "logdata"?
You can use the name of the logged and detrended variable, for example K.
As we know, we have already logged and detrended this variable, and we work with a demeaned K for a log-linear DSGE model.
In the varobs command you can simply write K.
This K has been logged, detrended, and demeaned beforehand, for example with a one-sided HP filter, for a log-linear DSGE model.
Thanks, and do I still need to specify "logdata" in the estimation command?
No. As I said, when you work with a log-linear DSGE model and your observable variables such as C, I, G have already been logged and demeaned, you only need to write in the varobs command
varobs C I G;
Your datafile must contain series with the header names C, I, G.
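As a minimal sketch of this setup (the datafile name mydata and the estimation options are assumptions, not from the original post):

```dynare
// Data were logged, detrended and demeaned outside Dynare,
// so no logdata/loglinear option is needed in estimation.
var C I G K;                  // model variables (illustrative)
varobs C I G;                 // must match the datafile headers
estimation(datafile=mydata,   // hypothetical file name
           mode_compute=4);   // illustrative optimizer choice
```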
Although I would recommend another method for this issue.
In the var command in Dynare, declare your observable variables with names such as C_obs, G_obs, I_obs, and so on.
In the varobs command in Dynare, write these variable names:
varobs C_obs G_obs I_obs;
In your datafile, for example an Excel file, the header names must be the same:
C_obs G_obs I_obs
We logged, detrended, and demeaned these variables with the one-sided HP filter before entering them into the Excel datafile that Dynare reads.
Additionally, you should write measurement equations in the model block for the observable variables. For example:
C_obs = C;
G_obs = G;
I_obs = I;
And in the varobs command you should write
varobs C_obs G_obs I_obs;
Here C_obs is the cyclical component of consumption that we derived with the one-sided HP filter.
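Putting the pieces above together, a skeleton mod file could look like this (the structural equations are omitted and the datafile name mydata is an assumption):

```dynare
var C G I C_obs G_obs I_obs;   // model variables plus observables

model;
  // ... structural equations determining C, G, I ...
  C_obs = C;   // data were logged, one-sided-HP-detrended
  G_obs = G;   // and demeaned before being written to the
  I_obs = I;   // Excel file, so the mapping is one-to-one
end;

varobs C_obs G_obs I_obs;
estimation(datafile=mydata);   // Excel headers: C_obs G_obs I_obs
```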
For more information, see Professor Pfeifer's very good guide on this issue.
Thank you, Eisa. This is what I thought. Grateful for the help.
I have five tutorial videos on DSGE estimation and how to enter observable variables into the model (about 21 hours of material).
Unfortunately, my videos are in Farsi, not in English.
Although simulation works, parameter estimation throws the error "One of the eigenvalues is close to 0/0 (the absolute value of numerator and denominator is smaller than 0.0000!)". I am not able to figure out what might be going wrong despite giving initial values.
Could you please help?
Although I have not seen your model, probably one of your equations contains a ratio that evaluates to 0/0 when Dynare solves the model.
As far as I know, by default Dynare treats an eigenvalue smaller than 0.000001 as zero. Therefore you end up with a 0/0 ratio in one of your model's equations, which is undefined in mathematics.
In some cases this ratio may involve the steady-state values of two different variables; if you set one of them to 0 in the steady state, you may encounter this error in the Dynare output.
Thank you. I am not able to figure out what might be creating the issue. As for the steady-state values, none of them is 0, but one is negative.
I mentioned the steady state only as an example.
Your steady state may not be zero, but the Dynare output shows an eigenvalue of the form 0/0, and that is what causes the problem.
I have not seen the entire model.
The identification analysis says:
The model does not solve for prior_mean (info = 7: One of the eigenvalues is close to 0/0 (the absolute value of numerator and denominator is smaller than 0.0000!
If you believe that the model has a unique solution you can try to reduce the value of qz_zero_threshold.)
resid;
steady( qz_zero_threshold = 1e-20 ) ;
check;
model_diagnostics;
Try the method above. Also, if you have enough time, check the model and its details again carefully.
DSGE models are very sensitive, and even a small mistake can cause an error when you run the model.
You need to be precise when solving for and calculating the steady state and when writing the equations in Dynare. Parameter values, steady-state calculations, and the model equations are all critical in DSGE models; any small mistake can break the solution.
It throws a syntax error: unexpected QZ_ZERO_THRESHOLD.
And model_diagnostics says:
MODEL_DIAGNOSTICS: No obvious problems with this mod-file were detected.
I re-checked the model for errors, but it still throws the following error when I try estimation:
======= Identification Analysis ========
Testing prior mean
The model does not solve for prior_mean (info = 7: One of the eigenvalues is close to 0/0 (the absolute value of numerator and denominator is smaller than 0.0000!
If you believe that the model has a unique solution you can try to reduce the value of qz_zero_threshold.)
Try sampling up to 50 parameter sets from the prior.
Identification stopped:
The model did not solve for any of 50 attempts of random samples from the prior
Estimation using a non linear filter!
ESTIMATION_CHECKS: There was an error in computing the likelihood for initial parameter values.
ESTIMATION_CHECKS: If this is not a problem with the setting of options (check the error message below),
ESTIMATION_CHECKS: you should try using the calibrated version of the model as starting values. To do
ESTIMATION_CHECKS: this, add an empty estimated_params_init-block with use_calibration option immediately before the estimation
ESTIMATION_CHECKS: command (and after the estimated_params-block so that it does not get overwritten):
Error using print_info
Error using print_info
One of the eigenvalues is close to 0/0 (the absolute value of numerator and denominator is smaller
than 0.0000!
If you believe that the model has a unique solution you can try to reduce the value of
qz_zero_threshold.
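The ESTIMATION_CHECKS hint above can be sketched as follows (the prior shown in the comment is a placeholder and the datafile name mydata is an assumption):

```dynare
estimated_params;
  // ... your priors, e.g.:
  // rho, beta_pdf, 0.5, 0.2;
end;

// Empty block: estimation starts from the calibrated parameter values
estimated_params_init(use_calibration);
end;

estimation(datafile=mydata);
```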
Can someone help? The revised mod file is attached:
RBC_model_check_Mar22.mod (3.6 KB)
With the prior mean, the steady state values are
STEADY-STATE RESULTS:
b 18927.8
c 563.354
k 2772
h 16.0481
z 1
That does not seem sensible. You may want to provide analytical steady-state values.
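Providing them via a steady_state_model block would look like this (the closed-form expressions and the parameter names alpha, beta, delta are purely illustrative placeholders, not the solution of the attached model):

```dynare
steady_state_model;
  z = 1;                                                  // normalization
  k = ((1/beta - (1 - delta))/(alpha*z))^(1/(alpha - 1)); // hypothetical
  c = z*k^alpha - delta*k;                                // hypothetical
  h = 1/3;                                                // hypothetical
  b = 0;                                                  // hypothetical
end;
```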
I tried what you suggested, but then estimation runs into the following error:
POSTERIOR KERNEL OPTIMIZATION PROBLEM!
(minus) the hessian matrix at the "mode" is not positive definite!
=> posterior variance of the estimated parameters are not positive.
You should try to change the initial values of the parameters using
the estimated_params_init block, or use another optimization routine.
MODE CHECK
Warning: Matrix is singular, close to singular or badly scaled. Results may be inaccurate. RCOND =
NaN.
What might be creating this problem?
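For reference, the two remedies named in the error message can be sketched as follows (the datafile name mydata, the mode file name, and the specific mode_compute values are assumptions):

```dynare
// 1) Try a different optimizer, e.g. newrat instead of csminwel:
estimation(datafile=mydata, mode_compute=5);

// 2) Or find the mode with the Monte-Carlo optimizer first, then
//    refine it starting from the saved mode file:
estimation(datafile=mydata, mode_compute=4, mode_file=mymodel_mode);
```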
I figured out the problem. The collinearity identification diagnostics show that sigma is highly correlated. Is a statistic of 0.999 already an issue, or do we only worry when the statistic is 1.000?
Collinearity patterns with 1 parameter(s)
Parameter [ Expl. params ] cosn
rho [ sigma_z ] 0.9853418
psi_x [ omega ] 0.9989294
phi [ sigma_z ] 0.9703952
omega [ psi_x ] 0.9989294
sigma_z [ rho ] 0.9853418
Collinearity patterns with 2 parameter(s)
Parameter [ Expl. params ] cosn
rho [ phi sigma_z ] 0.9990379
psi_x [ omega sigma_z ] 0.9999992
phi [ rho sigma_z ] 0.9980707
omega [ psi_x sigma_z ] 0.9999992
sigma_z [ rho phi ] 0.9996500
Also, I run into an "unbounded density" problem for my parameters.
Further, the mode_check plots show red dots at the corners for certain parameters; I suppose that is a sign of unidentifiability?
mode_compute=6 works, but the others do not, and there is a convergence issue with mode_compute=6. I went through the following post to see what might be going wrong:
https://forum.dynare.org/t/unbounded-density/12075/13
Given the above, is it correct to conclude that there is an identifiability issue in the model?
Apart from that, the model is very sensitive to the priors. For most choices of prior mean and standard deviation (it is more sensitive to the latter), it throws the error:
POSTERIOR KERNEL
OPTIMIZATION PROBLEM!
(minus) the hessian matrix at the "mode" is not positive definite!
=> posterior variance of the estimated parameters are not positive.
You should try to change the initial values of the parameters using
the estimated_params_init block, or use another optimization routine.
MODE CHECK
Fval obtained by the minimization routine (minus the posterior/likelihood)): 8.162760
Warning: Matrix is singular, close to singular or badly scaled. Results may be inaccurate. RCOND =
NaN.
What could be the reason for this?
Without the new set of codes, including the data, it is impossible to tell.