# Measurement equations and steady state growth

I’m trying to estimate the small open economy (SOE) model with incomplete pass-through and habit formation of Justiniano & Preston (2004), but I’m having trouble figuring out how to incorporate the steady-state growth rates into the measurement equations. The data I’m using are quarterly and consist of 10 variables: domestic real GDP per capita, the domestic interest rate, the real exchange rate, the nominal exchange rate, the domestic CPI, the domestic import price index, the domestic export price index, foreign real GDP per capita, the foreign interest rate, and the foreign CPI. All the data are in levels, except for the domestic and foreign interest rates, which are expressed as decimals.

I’ve tried to follow Smets & Wouters (2007) and the following threads, but I want to check that what I’m doing is correct.

https://forum.dynare.org/t/general-rules-for-measurement-equations/1460/1
https://forum.dynare.org/t/simple-model-unkown-error/1370/1

I’ve taken log first differences and demeaned all of the observables (is this correct?). Right now I’m trying to match them with the variables in the model. I’m using the following measurement equations, where the me’s are measurement errors.

```
// nominal interest rate
obs_r_h = r_h - r_h(-1) + me_r_h;
// domestic inflation
obs_pi_h = pi_h + me_pi_h;
// home gdp growth
obs_y_h = y_h - y_h(-1) + me_y_h;
// nominal exchange rate change
obs_e = e - e(-1) + me_e;
// real exchange rate change
obs_q = q - q(-1) + me_q;
// nominal foreign interest rate
obs_r_star = r_star - r_star(-1) + me_r_star;
// foreign gdp growth
obs_y_star = y_star - y_star(-1) + me_y_star;
// import inflation
obs_pi_f = pi_f + me_pi_f;
// SOE CPI inflation
obs_pi = pi + me_pi;
// foreign CPI inflation
obs_pi_star = pi_star + me_pi_star;
```

I realize this assumes no steady-state growth. How should I define the measurement equations if I want to incorporate steady-state GDP and interest rate growth? I tried adding a_h and a_star to the GDP equations and (1/beta)-1 to the interest rate equations, and received the following error, which I assume is a stationarity issue. My code is attached.

```
??? Error using ==> lnsrch1 at 53
Some element of Newton direction isn't finite. Jacobian
maybe singular or there is a problem with initial
values
```

mymodel3.mod (10.6 KB)

Hi, could you please also post the data file?

The data file is attached. Thanks!
mymodeldata7.mat.zip (8.94 KB)

Hi,
this is not a stationarity issue (although one may appear later in the estimation, as the exchange rates are non-stationary and you may need to use the diffuse Kalman filter, i.e. lik_init=2). Your problem derives from a steady-state issue. In fact, your procedure for including the constant terms in the observation equations is correct. What you forgot is to adapt the steady state accordingly. If there is a (1/beta)-1 in the interest rate equation, its steady state is not 0, but (1/beta)-1. Hence, you need

```
initval;
obs_r_h = (1/beta)-1;
end;
```
and accordingly for the other non-demeaned observables. Unfortunately, Dynare is not very explicit with this error message in linear models.
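Putting the two pieces together, the relevant parts of the mod file might look like this (a minimal sketch in the thread's notation; `a_h` stands for the steady-state growth constant added to the output equation, and whether each observable enters in levels or first differences should follow the original setup):

```
// observation equations with constant terms added
obs_y_h = y_h - y_h(-1) + a_h + me_y_h;        // a_h: mean output growth
obs_r_h = r_h - r_h(-1) + (1/beta)-1 + me_r_h; // mean net interest rate

// the steady states of the observables must match those constants
initval;
obs_y_h = a_h;
obs_r_h = (1/beta)-1;
end;
```

In steady state the first-differenced terms are zero, so each observable's steady state equals exactly the constant added to its equation.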

Also note that your data are currently scaled so that a 1% shock is 0.01, but the prior means of your standard deviations are 1, i.e. 100%. Hence, your standard deviations may be off by a factor of 100. Smets/Wouters, for example, multiply their data by 100 for this reason.

jpfeifer, thanks for your comments! I have modified the code by multiplying the steady states by 100 and adding the steady-state values. I have also modified the data file. I now only demean the exchange rate and CPI variables, since I don’t specify a steady state for them. Can you please take another look? When I run the program, csminwel and the MATLAB solver (mode_compute=2) no longer work for computing the posterior mode… does this mean I have to use mode_compute=6?

Also, since I defined the rate of growth (rho_a_h and rho_a_star) for the technology processes to be 0.9, wouldn’t this always cause GDP growth to decrease? Is this reasonable? If I change them to 1.1, for example, the system explodes (I’m assuming it’s no longer stationary).

Thanks again!
Archive 4.zip (11.5 KB)

Hi,
if you want to follow Smets/Wouters, you should not only multiply the SS values by 100 but also your data series (you can compare your observation equations and data to the SW data on the AER homepage).
If I understand your model correctly, it does not model non-stationary technology growth. Hence, your model's prediction is that the growth rates of output are 0 on average (note that a non-zero mean growth rate derives from a deterministic trend or a drift in a random walk process). Your rho_a_h only governs how fast temporary deviations from the SS decay to 0. Hence, I think the correct way for output would be to demean the growth rates in the data and not add a constant term in the observation equation.

Regarding the interest rate, your approach also seems wrong/unnecessary. You cannot define a separate steady-state parameter r_star_SS, as it is a perfect function of beta. When you define it as a parameter, this sets r_star_SS to the value on the RHS once, but does not preserve the functional relationship when beta changes. The correct way would be to define an expression (a model-local variable) instead.
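In Dynare syntax, the distinction presumably looks like this (a sketch; the `#` operator defines a model-local variable that is re-evaluated as beta changes):

```
// wrong: evaluates 1/beta-1 once, so r_star_SS does not
// move with beta during estimation
parameters r_star_SS;
r_star_SS = 1/beta - 1;

// correct: a model-local variable inside the model block,
// which preserves the functional relationship with beta
model(linear);
#r_star_SS = 1/beta - 1;
// ... equations using r_star_SS ...
end;
```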

Finally, given the way you estimate your model, I am not sure using non-demeaned data adds much, because you hardly use the information contained in these means. To see this, consider the interest rate. As the unconditional mean is a function of beta, the mean will only be informative about beta. But you do not estimate beta; you calibrate it. Hence, you could also use demeaned data without much loss of information. Note also that using the mean with a calibrated beta might give rise to an additional problem: your calibrated beta might predict a different mean than is present in the data. In this case you will force your model to match this behavior by having a shock series for interest rates that is non-mean-zero (usually this is not desirable). The result is that many people calibrate beta to match the real interest rate in the data. But if you do this, you can use demeaned data for the interest rate, as you have already imposed the information contained in the mean through this calibration.

Thanks for the comments. Basically you are suggesting that I go back to the original mod file where I demeaned all the variables, right? I followed Smets and Wouters’ data and multiplied the observables by 100. I also included the lik_init=2 option that you mentioned earlier, since there is a unit root in the nominal exchange rate variables. However, mode_compute=4 and mode_compute=1 no longer work. I’m going to try mode_compute=6, but I was hoping you could take another look at the model. I have also included the data this time in xls format, in case the problem is with the data. Thank you so much!

edit: mode_compute=6 is not giving sensible results
Archive 5.zip (65.8 KB)

Sometimes I also get the following error when trying to compute Bayesian IRF’s.

```
??? Error using ==> area at 48
X must be same length as Y.

Error in ==> PosteriorIRF at 425
h1 = area(1:options_.irf,HPDIRF(:,2,j,i));

Error in ==> dynare_estimation_1 at 1078
PosteriorIRF('posterior');

Error in ==> dynare_estimation at 62
dynare_estimation_1(var_list,varargin{:});

Error in ==> mymodel2 at 356
dynare_estimation(var_list_);

Error in ==> dynare at 132
evalin('base',fname) ;
```

Yes, I would recommend using demeaned data if you do not want to estimate beta.

A few quick thoughts. For larger models it is not uncommon to have problems computing the mode. The problem is the size of the measurement error, which you do not estimate. When you do not multiply the data by 100, the calibrated standard deviation of 1 is huge. Hence the fit of the model is a lot better (as the part accounted for by the measurement error does not have to be explained). In contrast, when multiplying the data by 100, a variance of 1 may not be sufficient.
It is correct that Taylor rules specify the interest rate at quarterly rates; hence annualized series are divided by 4. Remember that the same holds true for inflation. If inflation is computed as the log difference of the GDP deflator, it is already at quarterly rates. However, if you have inflation at annual rates, using it directly is wrong.
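For example, instead of dividing the data by 4 before estimation, one could scale the model side of the observation equation (a sketch in the thread's notation; `obs_r_h_annual` is a hypothetical name for an annualized, demeaned interest-rate series, and r_h is assumed to be the quarterly model variable):

```
// multiplying the quarterly model variable by 4 matches it to
// annualized data, keeping the units of all observables consistent
obs_r_h_annual = 4*r_h + me_r_h;
```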

Regarding the posterior IRFs: have you checked whether the problem still occurs with the newest Dynare snapshot? If you can still replicate it, you should open a new post.

Thanks for the reply, some quick questions:

1. How large should the measurement errors be?
2. Since I’m taking log first differences of all the variables (including interest rates), doesn’t the division by 4 cancel out? I noticed Smets and Wouters use a measurement equation of the form `rOBS = r + r_SS`, and not `rOBS = r - r(-1)`, as I have been using.

I am downloading and installing the latest snapshot, so hopefully the Bayesian IRF problem will go away. Thanks again!

If you want to include measurement error (there is a debate about whether one should do this), a common way is to bound it at e.g. 25% of the standard deviation of the observed series. You would have to compute the upper bound of the prior accordingly by hand.
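For instance, if the observed inflation series had a standard deviation of 0.6 (an assumed value, for illustration only), the upper bound would be 0.25 × 0.6 = 0.15, entered by hand (a sketch; the uniform prior is just one possible choice):

```
estimated_params;
// upper bound 0.15 computed by hand as 25% of the std of obs_pi_h
stderr me_pi_h, uniform_pdf, , , 0, 0.15;
end;
```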

The problem with the division by 4 is that all observed variables have to be in quarterly values. The trouble arises if one series is divided by 4 and another one is not. To see this, consider inflation and the interest rate. The annual variance of inflation is larger than the quarterly one. If you use annual inflation but quarterly interest rates, the model will have trouble explaining the low variance of the interest rate and the high variance of inflation. However, you are right that if you use growth rates, the division by 4 cancels out.

Regarding SW: interest rates are often considered stationary in the DSGE literature, so there is no theoretical reason to use first differences. In contrast to genuinely non-stationary variables, you could also use the SW observation equation.
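In that case the interest rate would enter in levels rather than differences (a sketch in the thread's notation, with the steady-state constant written out as `(1/beta)-1`):

```
// level specification as in SW: observable = model deviation + steady state
obs_r_h = r_h + (1/beta)-1 + me_r_h;
```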

Thanks once again. I have introduced priors for the measurement errors of 4 of the variables, but for two of the observables (obs_pi_h and obs_pi_f) the posterior mean always seems to hit the upper bound I have set. What does this mean?

Unfortunately, it is very common that the measurement error hits the upper bound. The current view of the profession seems to be that it depends on your philosophy whether this is OK. One camp argues that you should never introduce measurement error just to avoid stochastic singularity; you should rather add structural shocks and force them to explain the data instead of assuming the true data look different due to measurement error, because if you just assume enough measurement error, every model can explain the data. The other camp (I think e.g. Del Negro/Schorfheide) argues that introducing measurement error might be a way to alleviate model misspecification and is hence appropriate.

I was not able to get reasonable results with any combination of measurement errors and bounds, so I’ve decided to take them out entirely. To handle the singularity problem, I dropped the nominal exchange rate, import inflation, and domestic inflation observables and added a terms-of-trade observable in their place. So now I have 8 shocks and 8 observables. I also restricted the sample to more recent observations, hoping this would improve the results.

When I try using diffuse_filter, I get the message

```
??? Error using ==> univariate_diffuse_kalman_filter at 102
univariate_diffuse_kalman_filter:: There isn't enough information to
estimate the initial conditions of the nonstationary variables
```

so I use lik_init=2 instead to handle the unit root in the nominal exchange rate.

However, neither mode_compute=1 nor mode_compute=4 is working, and the mode_check plots show all kinds of cliffs. I am desperately trying to obtain decent results and would really appreciate some help! Thanks!
Archive 7.zip (6.86 KB)