Unit roots in shocks (but not in theoretical moments)

I’m running an estimation, and after correcting for some factors that would prevent identification (i.e. increasing the AR component enough that there are at least as many autocorrelations as there are estimated parameters), I am getting unit roots in the persistence parameters of the shock equations. I checked the eigenvalues and a couple are close to unity (around 1.12), but there are no infinite variances in the theoretical moments of the model.

What could be causing unit roots* in the estimated parameters (rhho_istar, for example) if the model itself is fine, and the data have all been detrended, de-meaned, and seasonally adjusted and match the observation equations in form (i.e. log-deviations from steady state, where the steady state is the long-run average of the detrended data)?

*Another strange outcome is that the household habit formation parameter [hh] is usually estimated to be very close to unity, which produces a divide-by-zero error at hh = 1. The coincidence of these factors makes me think that either something is wrong with the model, or the economy being studied really is characterized by essentially perfect persistence, which would be a dubious result.
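(To illustrate what I mean by the boundary problem: in principle the priors could be tightened so the sampler stays away from hh = 1 and from unit persistence. The block below is only a sketch with placeholder prior shapes and values, not what is in the attached files:)

```
estimated_params;
  // beta priors keep the draws strictly inside (0,1); means and standard deviations are placeholders
  rhho_istar, beta_pdf, 0.70, 0.10;
  hh,         beta_pdf, 0.70, 0.10;
end;
```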

Files are attached.
(I’ve updated the standard errors of the shocks in EstimationTest5, just in case that was causing the problem. Testing that .mod file right now.)

Thank you for your comments!
EstimationTest5.mod (7.74 KB)
varobs3.m (5.73 KB)
EstimationTest2.m (24.2 KB)

I can’t see anything immediately suspicious. Maybe your model puts large persistence into some unobserved variables to fit the data. This is hard to diagnose.

I had initially thought it was a problem of not having enough data, since the fewer data series I include, the more closely the posteriors match the priors (or at least, the less often I get unit-root posteriors or posteriors that cause divide-by-zero errors). But I’m now running the model on about 35 years of quarterly Australian data (143 observations, compared to 52 previously) and I’m getting similar problems.

The only other thing I can think of is the choice of which variable’s timing to change so that the number of forward-looking variables matches the number of eigenvalues greater than unity.

With how I copied the model in (assuming I copied it in correctly), there was one more forward-looking variable than there were eigenvalues greater than unity. I understand that Dynare uses end-of-period timing for stock variables (sketched below), but none of the forward-looking variables (aggregate consumption [cC], aggregate CPI [ppi], exchange rate devaluation [delttae], wage rate [wr], and home goods CPI [ppi_H]) are what I would consider stock variables. I selected which one to make contemporaneous (so that forward-looking variables = eigenvalues) by looking for the one that never appears with a lag. delttae is the only one of the five that is never lagged, so in equation five (line 88) I set the expectation back one time period (to -1), so that it operates on contemporaneous delttae*.
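(For reference, my understanding of the end-of-period convention mentioned above is the textbook capital-accumulation case: a stock chosen in period t is dated t and only becomes productive in t+1. A minimal sketch of the two relevant model-block equations, not taken from the attached model:)

```
// end-of-period stock timing: capital chosen in period t is dated t ...
k = (1 - delta)*k(-1) + invest;
// ... and only the stock carried over from t-1 enters production at t
y = a*k(-1)^alpha;
```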

Is this a valid way to select which variable’s timing to shift so that forward-looking variables = eigenvalues, or would one of the other variables be a more appropriate fit for Dynare’s methodology? Does the presence of existing lags in an equation (e.g. one containing cpi(+1), cpi, and cpi(-1)) mean it should never be shifted back one period (i.e. to cpi, cpi(-1), and cpi(-2))?

*As I wrote this, I realized that while I set delttae back one time period in this equation, I did not set back the other two instances of delttae. I am testing this change right now.
-----If I do this, I get the warning “The following endogenous variables aren’t present at the current period in the model: delttae,” so I am uncertain whether delttae is the right variable to shift, or whether I’ve shifted it correctly across the model.

Your way of solving the Blanchard-Kahn issue is plain wrong. You cannot selectively alter the timing of some variables; there is one and only one correct timing.
Please check the timing in every single equation again.
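A quick sanity check before any estimation is to run Dynare’s own diagnostics on the calibrated model, for example:

```
model_diagnostics;   // flags obvious specification problems (e.g. collinear equations)
check;               // prints the eigenvalues and whether the Blanchard-Kahn conditions are satisfied
```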

We’ve checked the timing a couple of times, and though we’re probably misunderstanding something, we think we’ve copied the paper’s explicitly written log-linearized model correctly (we did find one error recently, but correcting it did not solve the problem). However, we’ve stuck to the 32 equations explicitly listed as the model equations in the appendix; there are a number of supplementary equations (variable definitions, etc.) without equation numbers that seem to have been included mostly for explanatory purposes, and we never incorporated those.

The other possibility is that not allowing for time-varying steady states is the problem, but the source paper strongly implies that the authors did not allow for time-varying steady states either (which I understand is wrong, but this is mostly a replication for now, and we will correct it later), so this probably isn’t the source of the Blanchard-Kahn failure. Outside of these considerations I’m not sure what other options we have, which is why I assumed Dynare’s timing methodology (end-of-period rather than beginning-of-period timing for stocks) might be the explanation.

We certainly appreciate all the help, and we’ll keep looking into this.

When doing a replication exercise, go step by step. Before estimating a model, make sure you can replicate the results using a calibrated model with the parameters from the original paper (a rough sketch of that workflow is below). I take your description in this post to mean that you had trouble getting the simulated model to run using the equations given in the paper and then resorted to selectively changing some timing assumptions so that the model ran.

All I am saying is that there is one unique correct timing. Either your implementation is wrong (which you deny, as you say you have checked everything) or the equations given in the original paper are wrong; that cannot be excluded either. When you are not sure the model is correct, you should not do estimation. There must be some elements in the paper against which you can verify that your model is correct (say, IRFs at the posterior mean). When you cannot find the wrong timing, start with a simplified version of the model that runs and slowly add features. Believe me, I know how painful replications can be.
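A rough sketch of that step-by-step workflow (the parameter values below are placeholders standing in for the paper’s calibration):

```
// 1) fix the parameters at the values reported in the paper (numbers here are placeholders)
hh         = 0.70;
rhho_istar = 0.85;
// ... remaining calibrated parameters ...

// 2) solve the calibrated model and compare the IRFs to the figures in the paper
//    before adding any estimation command
stoch_simul(order=1, irf=20) cC ppi delttae wr ppi_H;
```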