How does Dynare model expectations of t+n variables?

Hi all, I’m trying to manually code, using gensys.m, a model with expectations of output at t+2.

In the system of equations \Gamma_0 x_t = \Gamma_1 x_{t-1} + \Psi\varepsilon_t + \Pi\eta_t, I tried to create an auxiliary variable for output, x^{\text{augmented}}_t, and set x_t equal to the lag of x^{\text{augmented}}_t, but that did not work. So how does Dynare do it?

To take an example, I’ll use the code of Herbst and Schorfheide:

% Eq. 1: consumption Euler equation (IS curve)
GAM0(eq_1,y_t) = 1;
GAM0(eq_1,R_t) = 1/tau;
GAM0(eq_1,g_t) = -(1-rho_g);
GAM0(eq_1,z_t) = -rho_z/tau;
GAM0(eq_1, Ey_t1) = -1;
GAM0(eq_1, Epi_t1) = -1/tau;

% Eq. 2: New Keynesian Phillips curve
GAM0(eq_2,y_t) = -kappa;
GAM0(eq_2,pi_t) = 1;
GAM0(eq_2,g_t) = kappa;
GAM0(eq_2, Epi_t1) = -bet;

% Eq. 3: monetary policy rule with interest-rate smoothing
GAM0(eq_3,y_t) = -(1-rho_R)*psi2;
GAM0(eq_3,pi_t) = -(1-rho_R)*psi1;
GAM0(eq_3,R_t) = 1;
GAM0(eq_3,g_t) = (1-rho_R)*psi2;
GAM1(eq_3,R_t) = rho_R;
PSI(eq_3,R_sh) = 1;

% Eq. 4: auxiliary lagged output, y1_t = y_{t-1}
GAM0(eq_4,y1_t) = 1;
GAM1(eq_4,y_t) = 1;

% Eq. 5: AR(1) government spending shock
GAM0(eq_5,g_t) = 1;
GAM1(eq_5, g_t) = rho_g;
PSI(eq_5, g_sh) = 1;

% Eq. 6: AR(1) technology shock
GAM0(eq_6,z_t) = 1;
GAM1(eq_6, z_t) = rho_z;
PSI(eq_6, z_sh) = 1;

% Eq. 7: expectational error for output, y_t = E_{t-1}[y_t] + eta^y_t
GAM0(eq_7, y_t) = 1;
GAM1(eq_7, Ey_t1) = 1;
PPI(eq_7, ey_sh) = 1;

% Eq. 8: expectational error for inflation, pi_t = E_{t-1}[pi_t] + eta^pi_t
GAM0(eq_8, pi_t) = 1;
GAM1(eq_8, Epi_t1) = 1;
PPI(eq_8, epi_sh) = 1;

If I wanted to include a variable standing for the time-t expectation of output at time t, I would create an auxiliary variable for lagged output, call it x_1t, and use the expectation of that, which would require the following additional code:

GAM0(eq_9, x_1t) = 1;
GAM1(eq_9, Ex_1t) = 1;
PPI(eq_9, ex_1tsh) = 1;

GAM1(eq_10, x_t) = 1;

I do not know, however, how to reverse this and simulate the time-t expectation of output at time t+2.

Usually, it is done the other way round. You define a variable that is the expected value E_t(y_{t+1}) and then compute the expectation of that variable one period in the future: E_t(E_{t+1}(y_{t+2})) = E_t(y_{t+2}).
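In the gensys code above, the variable Ey_t1 already plays the role of E_t(y_{t+1}), with its expectational-error equation eq_7. A sketch of the one-extra-step recursion (the names EEy_t, eEy_sh, and the equation index eq_11 are illustrative, not part of the Herbst-Schorfheide code):

```matlab
% Sketch: add EEy_t = E_t[y_{t+2}] on top of the code above.
% The error equation  Ey_t1(t) = EEy(t-1) + eta_t  makes
% EEy_t = E_t[Ey_t1(t+1)] = E_t[E_{t+1} y_{t+2}] = E_t[y_{t+2}].
GAM0(eq_11, Ey_t1)  = 1;
GAM1(eq_11, EEy_t)  = 1;
PPI(eq_11, eEy_sh)  = 1;
```

EEy_t can then be used wherever E_t(y_{t+2}) appears in the model's equations, exactly as Ey_t1 is used for E_t(y_{t+1}).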

Let us call x_t my E_t(y_{t+1}). If I understand your explanation correctly, by taking E_t(x_{t+1}) I would implicitly be computing E_t(y_{t+2})? Why would that not simply give E_t(x_{t+1}) = E_t(E_t(y_{t+1})) = E_t(y_{t+1}) = x_t, by the law of iterated expectations?

Your mistake in the formula is that x_{t+1} = E_{t+1}(y_{t+2}), not E_t(y_{t+1}): you plugged in the definition of x_t again instead of shifting it forward one period.
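Written out: x_t = E_t(y_{t+1}) implies x_{t+1} = E_{t+1}(y_{t+2}), so E_t(x_{t+1}) = E_t(E_{t+1}(y_{t+2})) = E_t(y_{t+2}) by iterated expectations. A quick Monte Carlo sketch of this for a stand-in AR(1) process (not the Herbst-Schorfheide model):

```python
import numpy as np

# For an AR(1) y_{t+1} = rho*y_t + eps_{t+1}, we have x_t := E_t[y_{t+1}] = rho*y_t,
# so x_{t+1} = rho*y_{t+1} and E_t[x_{t+1}] = rho^2*y_t = E_t[y_{t+2}].
rng = np.random.default_rng(0)
rho, y_t, n_paths = 0.9, 1.0, 200_000

eps1 = rng.standard_normal(n_paths)
eps2 = rng.standard_normal(n_paths)
y_t1 = rho * y_t + eps1    # simulated y_{t+1}
y_t2 = rho * y_t1 + eps2   # simulated y_{t+2}
x_t1 = rho * y_t1          # x_{t+1} = E_{t+1}[y_{t+2}]

# Both sample means converge to rho^2 * y_t = E_t[y_{t+2}] = 0.81
print(y_t2.mean(), x_t1.mean(), rho**2 * y_t)
```

The point of the sketch is that x_{t+1} is a random variable from the perspective of time t (it depends on y_{t+1}), which is why E_t(x_{t+1}) is not trivially equal to x_t.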
