Hello…I am trying to run Bayesian estimation for my log-linearised model, and I am a bit conflicted about how I should transform the data for some specific variables in my model. I have read through Johannes Pfeifer's 'A Guide to Specifying Observation Equations for the Estimation of DSGE Models', and I believe this issue may not have been directly covered in the guide.

So I have a log-linearised model without a trend term. The variables that I have my doubts on are:

1. Real effective exchange rate (REER)
2. Exchange rate depreciation
3. Forex reserves
4. Bank lending

I would ideally like to use data on these variables in my estimation, but I am not entirely sure what treatment I should give the raw data. For instance, my hunch is that for the REER and exchange rate depreciation, I should simply take the log of their values and demean the series. Is this correct?

For forex reserves and bank lending, I am not entirely sure whether I need to first make them 'intensive' by taking per-capita values, then apply a one-sided HP filter, and then demean them. Or should I simply take the log of their 'raw' observed values and demean them?

FYI…my model will also have the usual observed variables on inflation, the interest rate, GDP, consumption, investment, etc., for which I will be using the technique outlined in the above-mentioned guide under the 'Models without a specified trend' section.

1) In principle, the real exchange rate should be stationary, so you could use the demeaned log. In practice there may be persistent movements, so using growth rates may be advocated.

2) Exchange rate depreciation is already a growth rate and is not logged again. It is only demeaned.
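As a minimal sketch of these two transformations (Python/NumPy, with purely hypothetical series values):

```python
import numpy as np

# Hypothetical quarterly series (illustrative values only)
reer = np.array([98.2, 99.1, 100.4, 101.0, 100.2, 99.5])              # index level
depreciation = np.array([0.012, -0.004, 0.008, 0.001, -0.002, 0.005])  # already a growth rate

# REER: take logs, then demean
log_reer = np.log(reer)
reer_obs = log_reer - log_reer.mean()

# Depreciation: already a growth rate, so do NOT log again -- only demean
depr_obs = depreciation - depreciation.mean()
```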
3+4) Both series should typically be cointegrated with output. If you divide output by population, I would do the same for these variables. Regarding stationarizing them: use the same "filter" you use for output, i.e. either the one-sided HP filter or first differences.
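The first-difference option can be sketched as follows (hypothetical reserves and population levels; per capita, log, first-difference, demean):

```python
import numpy as np

# Hypothetical levels of forex reserves and population (illustrative values only)
reserves = np.array([520.0, 530.5, 541.0, 560.2, 575.8, 590.1])
population = np.array([100.0, 100.3, 100.6, 100.9, 101.2, 101.5])

# Make the series intensive (per capita), then take logs
log_res_pc = np.log(reserves / population)

# First-difference option: log growth rates, then demean
d_res = np.diff(log_res_pc)
res_obs = d_res - d_res.mean()
```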

Thanks a lot for your comments; I have followed your instructions on the data. For the output-related variables I have calculated per-capita values, then logged, deseasonalised, detrended (using a one-sided HP filter), and demeaned them. For the others: logged, deseasonalised, detrended, and demeaned. As per your manual, I have also added measurement errors to my observation equations.
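For reference, the one-sided HP detrending step can be sketched in Python/NumPy. It uses the fact that the one-sided trend at time t equals the last point of the standard two-sided HP filter applied to data up to t (series values and names below are hypothetical; λ = 1600 for quarterly data):

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Two-sided HP trend: tau = argmin ||y - tau||^2 + lam * ||D2 @ tau||^2."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    if T < 3:
        return y.copy()  # penalty is vacuous for fewer than 3 points
    # Second-difference matrix D2, shape (T-2, T)
    D2 = np.zeros((T - 2, T))
    for i in range(T - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * (D2.T @ D2), y)

def one_sided_hp_cycle(y, lam=1600.0):
    """One-sided HP cycle: for each t, filter y[:t+1] and keep the last trend point."""
    y = np.asarray(y, dtype=float)
    trend = np.array([hp_trend(y[:t + 1], lam)[-1] for t in range(len(y))])
    return y - trend

# Hypothetical logged per-capita output series
log_y = np.log(np.linspace(100.0, 110.0, 12))
cycle = one_sided_hp_cycle(log_y)
obs = cycle - cycle.mean()   # demean after detrending
```

Since only past data enter the trend at each date, this avoids the end-of-sample revisions of the two-sided filter, which is why it is preferred for estimation.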

I am, however, facing a problem running the code. It runs fine as a calibrated version, but while running the Bayesian version:

Using mode_compute=6, I get the error message "There's probably a problem with the modified harmonic mean estimator". Using mode_compute=9, I get the message "(minus) the hessian matrix at the "mode" is not positive definite! => posterior variance of the estimated parameters are not positive."

I did some searching of previous posts, and I understand that the mode_check plots need to be consulted. I find that for a number of my parameters, the log-likelihood kernel is either a horizontal line or, in some cases, there are red dots on the x-axis (which I am guessing indicates violations of the Blanchard-Kahn conditions).

But I am a bit at a loss as to where to intervene in my code/computation to rectify the problem. I am attaching my mod file, my data file, and the resulting message after running with mode_compute=9. I would really appreciate your feedback on this.

P.S. The GDP variables I have taken are in current prices. So in the observation equations I have equated them to nominal components; e.g., export GDP has been equated to nominal exports, which in the model is export price times export demand. Could this by any chance be the problem?

You should be running identification before estimation! It says:

[quote]WARNING !!!
The rank of H (model) is deficient!

fi_f is not identified in the model!
[dJ/d(fi_f)=0 for all tau elements in the model solution!]

a_one is collinear w.r.t. all other params!
a_two is collinear w.r.t. all other params!
a_zero is collinear w.r.t. all other params!

WARNING !!!
The rank of J (moments) is deficient!

fi_f is not identified by J moments!
[dJ/d(fi_f)=0 for all J moments!]
[eps_dr,eps_c] are PAIRWISE collinear (with tol = 1.e-10) !
[eps_cap,eps_uip] are PAIRWISE collinear (with tol = 1.e-10) !
[rho_cap,rho_uip] are PAIRWISE collinear (with tol = 1.e-10) !
[theta_i,eta_c] are PAIRWISE collinear (with tol = 1.e-10) !

a_one is collinear w.r.t. all other params!
a_two is collinear w.r.t. all other params!
a_zero is collinear w.r.t. all other params!
rho_c is collinear w.r.t. all other params!
rho_dr is collinear w.r.t. all other params!
[/quote]

Thanks for this…I rectified my code accordingly (updated mod and data files attached). Identification now tells me:

"Parameter error: The model does not solve for prior_mean with error code info=4.
info==4 %! Blanchard & Kahn conditions are not satisfied: indeterminacy.
Try sampling up to 50 parameter sets from the prior."

Does this mean that I am estimating too many parameters? Or that my priors are wrong? Or is there something else at play that I am missing?

P.S. After playing around with the code for a bit (primarily by including or excluding some deep parameters in the estimation block), I am getting the same error as above, but now with the message that all parameters are identified in the model (rank of H), and a similar message for J. Still a matter to worry about, I am guessing?

Nothing says your model must solve at the prior mean, which is just one particular parameter combination. As long as the parameters are identified for other draws, everything should be fine in this respect.

Johannes, just to clarify your comment: am I right in understanding that, as long as my parameters are identified, I should not be worried about the error message "info==4 %! Blanchard & Kahn conditions are not satisfied: indeterminacy"?

Yes. The prior means are specified independently of each other (it is not a joint prior). While each individual value for the prior mean may make sense for the respective parameter, there is nothing that guarantees that the parameter vector resulting from the prior means makes sense jointly. It regularly happens that the prior mean lies in the instability/indeterminacy region. As long as you are not doing model_comparison, this is just a nuisance (it may, however, signify bigger problems in your model). From the perspective of identification alone, this is not a problem.
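The point can be illustrated with a toy example. Suppose (purely hypothetically) that determinacy in some stylized model requires the product of two parameters to exceed one. Each prior mean can look sensible on its own while the prior-mean vector is indeterminate, even though many joint draws from the priors are fine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical determinacy condition for a stylized model: a * b > 1
def determinate(a, b):
    return a * b > 1.0

# Independent priors: each mean looks individually "sensible"...
a_mean, b_mean = 0.95, 0.95
a_draws = rng.normal(a_mean, 0.2, 5000)
b_draws = rng.normal(b_mean, 0.2, 5000)

# ...yet the prior-mean vector itself is in the indeterminacy region,
mean_ok = determinate(a_mean, b_mean)        # False: 0.95 * 0.95 = 0.9025 < 1
# while a sizeable share of joint draws from the prior is determinate.
share = np.mean(determinate(a_draws, b_draws))
```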