Very Large t statistics in the estimated DSGE model

Dear all.

I have estimated Cho and Moreno’s (2006) new IS-LM model on data for Turkey, and I ended up with very large t statistics (some >21.000) after MLE estimation. I have several questions:

  1. Could this be due to the non-stationarity of one or several variables despite the fact that the ADF tests indicated otherwise?

  2. Would it be possible to obtain bootstrapped standard errors using Dynare, since asymptotic standard errors would be inaccurate with a sample of only 66 observations?

  3. In the Dynare documentation I could not find anything about declaring
    nonstationary variables as unit_root_vars. Supposing that this is done, what is the use of setting lik_init=2 during (MLE) estimation, since the variables are already declared as unit roots?

  4. How can I plot confidence intervals around the IRFs?

Thank you in advance for any suggestions.

[quote=“Cem279”]Dear all.

I have estimated Cho and Moreno’s (2006) new IS-LM model on data for Turkey, and I ended up with very large t statistics (some >21.000) after MLE estimation. I have several questions:

  1. Could this be due to the non-stationarity of one or several variables despite the fact that the ADF tests indicated otherwise?
    [/quote]

I don’t see how. Remember that the t statistic corresponds to the null hypothesis that the coefficient equals zero, so the coefficient may simply be far from zero; I don’t understand what worries you. If, however, you have very, very large t statistics together with near-zero standard errors, then you should worry that the optimization stopped against a boundary and you can’t do the tests the usual way.
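To illustrate the point about near-zero standard errors (the numbers below are hypothetical, not from any actual estimation output): the t statistic is just the point estimate divided by its standard error, so a standard error close to zero produces an enormous t statistic even for an ordinary-sized coefficient.

```python
# t statistic = estimate / standard error, for the null that the coefficient is zero.
# All numbers are made up for illustration.
estimates = [0.85, 1.42, 0.03]
std_errors = [0.10, 0.00006, 0.015]  # the second SE is nearly zero

t_stats = [b / se for b, se in zip(estimates, std_errors)]
print(t_stats)  # the second entry is ~23667, a red flag for a boundary problem
```

A huge t statistic by itself is not evidence of trouble, but a standard error that is essentially zero usually is.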

Yes, but you have to write the bootstrap code yourself. Basically: take the point estimates and use them to calibrate the model; then write a loop that simulates a sample with stoch_simul(periods=66,…), re-estimates the model on that sample, saves the results, and loops again.
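That loop can be sketched as follows. This is a minimal, self-contained Python illustration using an AR(1) in place of the actual DSGE model; in practice the simulation step would be a stoch_simul(periods=66) run and the re-estimation step a Dynare estimation run, and all names here are hypothetical.

```python
import numpy as np

def simulate_ar1(rho, sigma, n, rng):
    """Simulate n observations from y_t = rho * y_{t-1} + sigma * eps_t."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + sigma * rng.standard_normal()
    return y

def estimate_ar1(y):
    """OLS estimate of rho (stand-in for the model's MLE step)."""
    return float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))

rng = np.random.default_rng(0)
rho_hat, sigma_hat, n_obs = 0.9, 1.0, 66  # point estimates calibrate the model
boot = []
for _ in range(500):                       # bootstrap replications
    y_sim = simulate_ar1(rho_hat, sigma_hat, n_obs, rng)  # "simulate a sample" step
    boot.append(estimate_ar1(y_sim))       # "estimate the model back" step

boot_se = np.std(boot, ddof=1)             # bootstrap standard error of rho
print(boot_se)
```

The spread of the saved estimates across replications gives the bootstrap standard error; with only 66 observations it can differ noticeably from the asymptotic one.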

There is an entry for unit_root_vars in the estimation section of the manual. You shouldn’t use lik_init anymore.

There is no provision for it yet.
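Until such a feature exists, one workaround is to build the bands yourself from draws of the parameters (e.g. from a bootstrap of the point estimates or an MCMC chain): compute the IRF at each draw and take pointwise percentiles. A minimal sketch with a hypothetical AR(1) impulse response and hypothetical draws:

```python
import numpy as np

def ar1_irf(rho, horizon):
    """IRF of an AR(1) to a unit shock: rho**h at horizon h."""
    return rho ** np.arange(horizon)

rng = np.random.default_rng(1)
horizon = 20
# Hypothetical parameter draws; in practice these would come from your
# bootstrap replications or posterior simulation.
rho_draws = rng.normal(0.9, 0.05, size=1000).clip(-0.99, 0.99)

irfs = np.array([ar1_irf(r, horizon) for r in rho_draws])  # one IRF per draw
lower, upper = np.percentile(irfs, [5, 95], axis=0)        # pointwise 90% band
point = ar1_irf(0.9, horizon)
print(lower[1], point[1], upper[1])
```

Plotting `point` together with `lower` and `upper` then gives the IRF with its confidence band.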

Best

Michel

Thank you for your prompt reply.

You had said:

[quote]
If, however, you have very, very large t statistics together with near-zero standard errors, then you should worry that the optimization stopped against a boundary and you can’t do the tests the usual way.
[/quote]

You should try to understand why you are hitting a boundary of the parameter space. One way is to start with one observed variable, then add the others one by one, to see where the difficulty pops up.

Kind regards

Michel