HP-filtered data and theoretical moments

I’m calibrating to second moments of logged, HP-filtered data. I’m aware that, for the theoretical moments to be comparable with the logged, HP-filtered data moments, it is recommended to also HP-filter the (logged) model variables with stoch_simul(hp_filter=1600, order=1), which, according to the Dynare manual (4.6), computes theoretical HP-filtered moments following the outline in Uhlig (2001).
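For reference, this is the kind of call I’m using (the variable names below are placeholders for my actual observables, which already enter the model in logs):

```
% Order-1 solution with theoretical HP-filtered moments (lambda = 1600),
% computed in the frequency domain as in Uhlig (2001). periods is left at
% its default of 0, so no simulation is involved.
stoch_simul(order=1, hp_filter=1600, irf=0) y c i h;
```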

Now, when performing moment matching with this procedure, I think that applying the HP filter to the model makes it harder to match some moments. For instance, the first-order autocorrelation of GDP in my data is about 0.85. With a first-guess parameter vector, the unfiltered theoretical moment for \rho(gdp_{t},gdp_{t-1}) is about 0.9, but once I HP-filter, the theoretical moment drops to about 0.7. The problem is that, in this second case, the calibration struggles to raise this autocorrelation towards 0.84: the identified parameter (the persistence of the AR(1) TFP process) wants to take values very close to a unit root, which is wrong, since my model does not allow for a unit root in TFP.

With other moments the model also struggles to get close to the correct theoretical values. I’d appreciate any advice on how to improve my calibration by moment matching. I think I’m close: I’ve tried different initial-guess parameter values, and I’ve also dropped from the calibration some moments whose distance to their data counterparts does not improve after many iterations.
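For concreteness, here is a stripped-down sketch of the kind of grid search I’ve been running after an initial dynare run (rho_a is just the name I use for the TFP AR(1) persistence, and the calls to set_param_value, stoch_simul and oo_.autocorr assume Dynare 4.6’s MATLAB interface):

```
% Sketch only: assumes the .mod file has already been run once, so the
% globals M_, options_, oo_ and var_list_ are in the workspace, with
% options_.hp_filter = 1600 and theoretical moments requested.
iy       = strmatch('y', var_list_, 'exact');   % position of (log) output
rho_grid = 0.90:0.005:0.985;                    % stay away from the unit root
ac_y     = NaN(size(rho_grid));
for j = 1:numel(rho_grid)
    set_param_value('rho_a', rho_grid(j));      % TFP AR(1) persistence
    info = stoch_simul(var_list_);              % recompute theoretical moments
    if ~info(1)
        ac_y(j) = oo_.autocorr{1}(iy, iy);      % first-order autocorr. of y
    end                                         % (ordering assumed as in var_list_)
end
```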

Some comments would be appreciated. Thanks!

PS: I would also like to check whether the Uhlig (2001) reference mentioned for computing HP-filtered theoretical moments is the Ravn/Uhlig paper I have in mind. Thanks.

  1. Unfortunately, this sounds like your model is incapable of reproducing the data dynamics. Often there is not much you can do except for changing the model to have more persistence.
  2. No, the reference is not the Ravn/Uhlig paper. It’s Uhlig’s “A Toolkit for Analysing Nonlinear Dynamic Stochastic Models Easily” (Oxford Scholarship).

Hi. I was checking the Uhlig reference, and I’m wondering whether the frequency-domain technique in which the spectral density of the state-space representation of the model solution is computed (which allows obtaining filtered moments without simulation) is the same technique Dynare uses when solving the model at 2nd order and requesting theoretical moments. That is, are theoretical moments obtained the same way for the 1st- and 2nd-order stochastic solutions in Dynare? Thank you.

Without pruning, the answer is yes. Dynare provides a second-order accurate approximation to the moments. For that, the linear decision rules are sufficient (combined with what Uhlig suggests).
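In terms of the interface, a call along the following lines returns those theoretical moments (variable names are placeholders; with periods left at its default of 0, nothing is simulated):

```
% Theoretical moments at order=2 without pruning: second-order accurate,
% computed from the linear decision rules as in Uhlig's toolkit.
stoch_simul(order=2, irf=0) y c i;
```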


Thanks! Another thing comes to mind: how exactly does Dynare compute IRFs at 2nd order when not using any simulation?

IRFs at order=2 are always simulation-based, i.e. generalized IRFs (GIRFs).
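Roughly, at order=2 Dynare simulates a set of baseline paths, adds the impulse to each of them, and reports the average difference; the number of paths is controlled by the replic option (the variable names below are placeholders):

```
% Generalized IRFs at order=2: averaged over `replic` simulated paths
% (Dynare's default is 50 when order > 1).
stoch_simul(order=2, irf=40, replic=200) y c i;
```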


Following that, does it make any difference in the resulting IRFs if I just use one shock (i.e. zero stderr for all but one shock), or is that wrong? I’m aware Dynare orthogonalizes every shock, but how can I make sure that, no matter which other shocks are active at the same time, the resulting IRFs are pure?

I ask because, in my IRF analysis routine, I want to activate one shock at a time when computing IRFs, but I don’t know whether this would be wrong.

Thanks.

That would generally be wrong. The standard deviations of all shocks affect the point of the state space at which the IRFs are computed.
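To illustrate, keeping every shock’s standard deviation at its calibrated value and letting Dynare report one (generalized) IRF per exogenous variable preserves the role of the other shocks in determining the point around which the GIRFs are computed; the shock names and values below are placeholders:

```
shocks;
  var eps_a; stderr 0.010;   % keep each stderr at its calibrated value
  var eps_g; stderr 0.008;   % zeroing this would shift the point where GIRFs are taken
end;

% At order=2 Dynare already computes a separate (generalized) IRF for each
% shock, so there is no need to switch the other shocks off.
stoch_simul(order=2, irf=40) y c;
```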
