I estimated a medium-scale DSGE model with Bayesian estimation in Dynare. One thing totally out of my control is that the search algorithm stops at different points when I repeat it. I saw some discussions on the board saying that if you prefilter the data with an HP filter, you can reduce the randomness, but I don't get the point. Why does prefiltering matter?

Besides that, I impose some permanent shocks on the variables, i.e., shocks to the growth rates, so I don't think it is a good idea to HP-filter my data.

Do you have any ideas for reducing the randomness? Any input is appreciated.

Hi, I have the same problem, and actually I am still not able to solve it. I previously wrote suggesting the HP filter because, for a small model, I could solve the randomness problem that way. But now that I have a much bigger model, the pre-filtering strategy does not work.

In any case, the randomness comes from the fact that the optimizer csminwel randomly perturbs the search direction when it finds a cliff, i.e. when it has evaluated a set of parameters that violates the Blanchard-Kahn (BK) conditions, or when the likelihood is not defined for that set.

I think one cause of the different ending points is the presence of local maxima. Nevertheless, the estimation results should not be affected by the starting point of the Metropolis-Hastings (MH) sampler, and these starting points are in any case good ones, even if they are only local maxima, because they are points with high posterior density.

I do not know whether the following suggestion will help you (it did not in my case), but you can try mode_compute=6.
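For reference, a minimal sketch of how this would look in the mod file (the datafile name is a placeholder; mode_compute=6 and mode_check are actual Dynare options):

```
// Hypothetical estimation call: "mydata" is a placeholder datafile.
// mode_compute=6 uses a Monte-Carlo-based optimizer that tends to be more
// robust to cliffs in the likelihood than the default csminwel.
estimation(datafile=mydata, mode_compute=6, mode_check);
```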

Alternatively, you could try to "play" with your priors (that's what I am trying to do!).

Let me know if you solve the problem…

Paolo

Hi, p.gelain:

I did two things to my model and now the outcome is relatively stable, so I guess there might be some hints for our discussion:

1. I replaced all the expected shock terms, say s(t+1), with rho_s*s(t).
2. I fixed or narrowed down the priors of the parameters with the most negative variances reported by mode_check, a Dynare option that checks the shape of the likelihood function locally.

I think the second step contributed more, since the unstable outcome comes from the bad shape of the likelihood function.
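A sketch of what the two changes above might look like in a mod file (the names rho_s, eps_s, sigma, and mydata are hypothetical; estimated_params, mode_check, and estimation are real Dynare constructs):

```
// 1. Shock follows an AR(1), so its dated expectation s(+1) can be
//    replaced by rho_s*s in the equilibrium conditions.
s = rho_s*s(-1) + eps_s;

// 2. Tighter prior on a badly identified parameter (hypothetical name),
//    then inspect the likelihood locally around the mode with mode_check.
estimated_params;
  sigma, normal_pdf, 1.5, 0.05;  // narrower prior std than before
end;

estimation(datafile=mydata, mode_check);
```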

Best

Hi bigbigben

I solved my problem just by changing the shape of one of my priors (only one out of 31!). It is impressive how a single parameter can affect everything.

I would add that I have some concerns about your suggestion of narrowing (or indeed fixing) the priors. If I am not wrong, one robustness check (among many others) of the estimation results is to increase the variance of the priors in order to detect possible identification problems (see Canova and Sala 2007, "Back to Square One: Identification Issues in DSGE Models").

I would suggest you use mode_compute=6 (and also increase the number of iterations up to 100,000 when computing the mode) instead of the default optimizer. It takes a bit more time to find the starting points for the MH, but I think it is more powerful.

Paolo

[quote]I solved my problem just by changing the shape of one of my priors (only one out of 31!). It is impressive how a single parameter can affect everything.

I would add that I have some concerns about your suggestion of narrowing (or indeed fixing) the priors. If I am not wrong, one robustness check (among many others) of the estimation results is to increase the variance of the priors in order to detect possible identification problems (see Canova and Sala 2007, "Back to Square One: Identification Issues in DSGE Models").

I would suggest you use mode_compute=6 (and also increase the number of iterations up to 100,000 when computing the mode) instead of the default optimizer. It takes a bit more time to find the starting points for the MH, but I think it is more powerful.

[/quote]

I was referring to the number of iterations you can set when using mode_compute=6 (options_.Opt6Numb, default = 20000). You can find all the details in the Dynare Wiki at
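Concretely, it would be set as a global option before the estimation call, something like the following sketch (the datafile name is a placeholder, and the exact placement of the options_ assignment may depend on the Dynare version):

```
// Raise the iteration count for the mode_compute=6 optimizer
// (old-style global option, default 20000).
options_.Opt6Numb = 100000;
estimation(datafile=mydata, mode_compute=6);
```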

cepremap.cnrs.fr/DynareWiki/ … timization

Best regards.

Paolo

The randomness of the posterior mode is also related to indeterminacy in the model. If the posterior mode happens to be very close to the indeterminacy region, it is very easy to run into a "cliff". CSMINWEL will then randomly perturb the search direction, which worsens the outcome. The MH algorithm is also affected: we get a very low acceptance rate, no matter how small mh_jscale is.

Indeterminacy makes estimating a DSGE model really painful. I would like to know whether there is any method to reduce rounding errors in the computation, since determinacy hinges on whether the modulus of each eigenvalue is greater than one or not. If the coefficient matrix of the original model is ill-conditioned, rounding error is a problem.
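One way to see how close a parameterization is to the indeterminacy boundary is Dynare's check command, which prints the eigenvalues of the solved system and verifies the BK conditions at the current parameter values:

```
// After solving for the steady state, print the model's eigenvalues.
// Moduli very close to 1 suggest the BK condition could flip either
// way under small rounding errors.
steady;
check;
```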