# Mixed variables in Dynare's LRE system

Normally:

Qx_{t}=Rx_{t-1}+S\varepsilon_{t},

where

x_{t}=[x_{1t} \; \mathbb{E}_{t}x_{2t+1}]^{\top}, \; x_{t-1}=[x_{1t-1} \; \mathbb{E}_{t-1}x_{2t}]^{\top}\in\mathbb{R}^{n_{x}=n_{x_{1}}+n_{x_{2}}}, \; \varepsilon_{t}\in\mathbb{R}^{n_{\varepsilon}}, \; Q, \; R\in\mathbb{R}^{n_{x}\times n_{x}}

and

S\in\mathbb{R}^{n_{x}\times n_{\varepsilon}}.

Specifically,

x_{1t}

are predetermined/backward looking/past/non-expectational variables and

\mathbb{E}_{t}x_{2t+1}

are non-predetermined/forward looking/future/expectational variables.

Mixed variables are variables appearing both as predetermined and non-predetermined variables (e.g. inflation in the hybrid New Keynesian Phillips curve; see the Wikipedia article on Calvo (staggered) contracts).

Now, the first n_{x_{1}} rows of the matrices Q, R and S are the model’s linear equations (i.e. log-linearised laws of motion). If Dynare follows such a construction, then what equations characterise the last n_{x_{2}} rows of Q, R and S?

–GENSYS–

Sims’ gensys algorithm, for instance, introduces expectational errors:

Qx_{t}=Rx_{t-1}+S\varepsilon_{t}+T\eta_{t},

where

T\in\mathbb{R}^{n_{x}\times n_{\eta}}

and

\eta_{t}\in\mathbb{R}^{n_{\eta}}

such that

\forall i\in\{1,\dots,n_{x_{2}}\}, \; \eta_{it}=x_{2it}-\mathbb{E}_{t-1}x_{2it}

are its entries, i.e. the expectational revision equations

x_{2it}=\mathbb{E}_{t-1}x_{2it}+\eta_{it}.
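In other words (a sketch in the notation above, not a statement of Dynare’s internals): for a mixed variable, whose realisation x_{2it} also sits in the x_{1t} block, the corresponding last row of Sims’ system is an identity linking the realisation to its lagged expectation,

x_{2it}=\mathbb{E}_{t-1}x_{2it}+\eta_{it},

i.e. a row of Q selecting x_{2it} from x_{t}, a row of R selecting \mathbb{E}_{t-1}x_{2it} from x_{t-1}, a zero row of S, and a row of T selecting \eta_{it}.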

–SYNTHESIS–

How does Dynare account for mixed variables? What are the linear equations Dynare uses, with specific regard to mixed variables, in order to give rise to the linear rational expectations system on which the Blanchard and Kahn condition for a unique and stable solution is subsequently checked? The notation in Villemot’s work (https://www.dynare.org/wp-repo/dynarewp002.pdf), with respect to equations 7 and 8, is non-trivial: it is strongly deductive. Equation 7 appears to convey that

x_{1t}=\mathbb{E}_{t-1}x_{2t},

but doubts remain.

Whether that is the case or not, a worked example or a clearer exposition, following an inductive approach, would be most useful.

Thanks.

More concretely, could a member of the Dynare development team (or anyone else) please provide an exposition of how a model characterised by a single equation of the following form may be cast into LRE format? The equation is a hybrid New Keynesian Phillips curve:

\pi_{t}=(1-\psi)\beta\mathbb{E}_{t}\pi_{t+1}+\psi\pi_{t-1}+\kappa\varepsilon_{t};

parametrisation is obviously discretional and irrelevant.

What is the ultimate goal of your question? The paper you cite is not about casting the model equations into a LRE form like
Qx_{t}=Rx_{t-1}+S\varepsilon_{t}.
Rather, it’s about how to compute the solution matrices based on the first order approximation to a nonlinear equation system. When taking implicit derivatives, you need the distinction you mention, but not before that.

The ultimate goal of my question is to understand what equations Dynare uses for mixed variables whenever expressing the first order approximations of given non-linear DSGE laws of motion as a transition LRE model equation, in state space form, before applying the Blanchard and Kahn algorithm.

–IN DEPTH–

First order approximations of non-linear DSGE laws of motion are LRE models precisely of that form, qua pre-solution transition equations:

Qx_{t}=Rx_{t-1}+S\varepsilon_{t}.

Such is none other than the transition equation of their state space representation before computing their LRE solution.

Equation 1 in Villemot’s work therefore becomes equation 7, which is then cast into state space format just before equation 8. Specifically, does

I^{-}\hat{y}^{-}_{t}=I^{+}\hat{y}^{+}_{t}

mean

x_{1t}=\mathbb{E}_{t-1}x_{2t}

in my notation above? What are the equations characterising those identity matrices?

–IN BRIEF–

My query could be satisfied if the above hybrid New Keynesian Phillips curve were cast into a transition LRE model equation, using my notation or Villemot’s, as long as equation and matrix details were clarified.

Take Villemot’s and drop hats for notational simplicity. Do

y^{-}_{t}=\pi_{t}, \; y^{-}_{t-1}=\pi_{t-1}, \; y^{+}_{t+1}=\mathbb{E}_{t}\pi_{t+1}

and

y^{+}_{t}=\mathbb{E}_{t-1}\pi_{t}

hold? If so then what are the respective matrices?

@sebastien Do you maybe know the answer?

In the case of the single-equation NKPC model, using the notations of my WP, one simply has:
y_t = y_t^+ = y_t^- = \pi_t

y_t^- and y_t^+ are simply subsets of the y_t vector. And since in the present case there is a single variable, and that variable appears with both a lead and a lag in the model, all three vectors are identical. In particular, there is no expectancy term involved in the definition of y_t^+.

Also note that there is no partition of the equations, we do not assume that some equations determine forward-looking variables while some others determine backward-looking variables. All equations are treated on the same footing.

Thanks for having intervened. Elucidation on some steps is still kindly required.

Going by your paper and your answer, is the “structural state space representation of (7)” with regard to such a hybrid NKPC then

\left[\begin{array}{cc}1 & -\left(1-\psi\right)\beta \\ 1 & 0\end{array}\right]\left[\begin{array}{c}\pi_{t} \\ \mathbb{E}_{t}\pi_{t+1}\end{array}\right]=\left[\begin{array}{cc}\psi & 0 \\ 0 & 1\end{array}\right]\left[\begin{array}{c}\pi_{t-1} \\ \mathbb{E}_{t-1}\pi_{t}\end{array}\right]+\left[\begin{array}{c}\kappa \\ 0\end{array}\right]\varepsilon_{t}?
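Whatever its status as Dynare’s own construction, this conjectured representation can be sanity-checked numerically: fix arbitrary values for \pi_{t-1}, \varepsilon_{t} and \mathbb{E}_{t}\pi_{t+1}, let the hNKPC determine \pi_{t}, and use the deterministic reading \mathbb{E}_{t-1}\pi_{t}=\pi_{t} for the second row; the residual of the system should vanish. A sketch (parameter values are illustrative, not from the thread):

```python
import numpy as np

# Illustrative parameter values (assumed for this sketch).
psi, beta, kappa = 0.5, 0.99, 0.1

Q = np.array([[1.0, -(1 - psi) * beta],
              [1.0,  0.0]])
R = np.array([[psi, 0.0],
              [0.0, 1.0]])
S = np.array([kappa, 0.0])

# Pick arbitrary values for the "given" quantities ...
pi_lag, eps, E_pi_next = 1.0, 0.2, 0.7

# ... and let the hNKPC determine pi_t:
pi_t = (1 - psi) * beta * E_pi_next + psi * pi_lag + kappa * eps

# Under the deterministic reading E_{t-1} pi_t = pi_t:
x_t   = np.array([pi_t, E_pi_next])      # (pi_t, E_t pi_{t+1})'
x_lag = np.array([pi_lag, pi_t])         # (pi_{t-1}, E_{t-1} pi_t)'

residual = Q @ x_t - R @ x_lag - S * eps
```

The first row is the hNKPC by construction; the second row reduces to \pi_{t}-\pi_{t}=0 under the deterministic reading.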

On what grounds would you omit expectations?

To be sure, omitting hats and tildes for notational simplicity, as well as exogenous shocks, the “structural state space representation of (7)” from your paper is

\underbrace{\left[\begin{array}{cc}A^{0-} & A^{+} \\ I^{-} & 0\end{array}\right]}_D\left[\begin{array}{c}y^{-}_{t} \\ y^{+}_{t+1}\end{array}\right]=\underbrace{\left[\begin{array}{cc}-A^{-} & -A^{0+} \\ 0 & I^{+}\end{array}\right]}_E\left[\begin{array}{c}y^{-}_{t-1} \\ y^{+}_{t}\end{array}\right].

Do the two representations correspond? If not, could you please indicate how to transform the first to get to the second?

A key equation you seem to adduce, to the end of expressing said generic representation as

D\left[\begin{array}{c}I \\ g^{+}_{y}\end{array}\right]y^{-}_{t}=E\left[\begin{array}{c}I \\ g^{+}_{y}\end{array}\right]y^{-}_{t-1},

is

y^{+}_{t+1}=g^{+}_{y}y^{-}_{t},

which in our example would imply

\mathbb{E}_{t}\pi_{t+1}=g^{+}_{\pi}\pi_{t}.

Is it so? How would

g^{+}_{\pi}

then be found? If you will, such is precisely the equation one would be after.
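For what it is worth, here is one way g^{+}_{\pi} could be pinned down under the deterministic reinterpretation (a sketch, not necessarily Dynare’s procedure): substituting \pi_{t+1}=g^{+}_{\pi}\pi_{t} and \pi_{t}=g^{+}_{\pi}\pi_{t-1} into the deterministic NKPC \pi_{t}=(1-\psi)\beta\pi_{t+1}+\psi\pi_{t-1} yields the scalar quadratic (1-\psi)\beta g^{2}-g+\psi=0, of which the stable root is selected. With illustrative parameter values (assumed, not from the thread):

```python
import numpy as np

# Illustrative parameter values (assumed for this sketch).
psi, beta = 0.5, 0.99
a = (1 - psi) * beta

# Solve (1 - psi)*beta*g^2 - g + psi = 0.
g_roots = np.roots([a, -1.0, psi])
g_stable = min(g_roots, key=abs)      # the root inside the unit circle
g_unstable = max(g_roots, key=abs)    # the root outside it
```

With these values the roots are 10/11 ≈ 0.909 and 10/9 ≈ 1.111, which already suggests that g^{+}_{\pi} is not 1 in general.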

You can omit expectations because at time t you already know the shocks (there is no more uncertainty). Therefore \mathbb{E}_{t-1}\pi_{t}=\pi_{t}.

OK; so, to you,

g^{+}_{\pi}=1?

Waiting on @sebastien or @jpfeifer as well, to be sure.

Fundamentally what we are trying to solve is a matrix equation (corresponding to the unnumbered equation between (3) and (4) in my WP).

To solve that matrix equation, we reinterpret it in terms of a deterministic dynamic system, whose law of motion is precisely the policy function g_y that we are trying to solve for (see equation (4)). That dynamic system is not the same as the original rational expectations model, which I guess is the source of your confusion.

Thanks.

Is “the original rational expectations model” then the matrix equation for which you are trying to solve, “corresponding to the unnumbered equation between (3) and (4)”? Or is the latter a manipulation of the former in turn?

My need is to inductively visualise the algorithm which gets one from “the original rational expectations model” at least to the matrix equation between 3 and 4 and its state space representation (the hNKPC above could do as an example).

Ideally, a visualisation of the subsequent manipulations could also be presented, until the solution may be expressed as the linear time invariant transition equation

x_{t}=Ax_{t-1}+B\varepsilon_{t}.

In essence, it would be an application of your thoroughly deductive exposition.
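As an illustration of where the algebra ends up for the hNKPC, the method of undetermined coefficients can be sketched: guess \pi_{t}=A\pi_{t-1}+B\varepsilon_{t}, so that \mathbb{E}_{t}\pi_{t+1}=A\pi_{t}; matching coefficients in the hNKPC gives the same quadratic for A as the deterministic system, plus B=\kappa/(1-(1-\psi)\beta A). A sketch with illustrative parameters (this is not necessarily the algorithm Dynare implements):

```python
import numpy as np

# Illustrative parameter values (assumed for this sketch).
psi, beta, kappa = 0.5, 0.99, 0.1
a = (1 - psi) * beta

# Guess pi_t = A*pi_{t-1} + B*eps_t, hence E_t pi_{t+1} = A*pi_t.
# Matching coefficients: a*A^2 - A + psi = 0 and B = kappa/(1 - a*A).
A = min(np.roots([a, -1.0, psi]), key=abs)  # stable root
B = kappa / (1 - a * A)

# Spot check: the implied path satisfies the hNKPC exactly.
pi_lag, eps = 1.3, -0.4
pi_t = A * pi_lag + B * eps
E_pi_next = A * pi_t
residual = pi_t - (a * E_pi_next + psi * pi_lag + kappa * eps)
```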

The derivation of the matrix equation is presented in the section preceding it. I’m sorry but I don’t know how to explain it better than what’s in the paper. If you don’t understand a specific step in the derivation, feel free to ask specific questions.

OK, thanks. How are D and E found in the hNKPC example above? What are they?

As explained in the paper, and since there is no static variable, the non-trivial elements of those matrices are the derivatives of the equation. Also, as described in section 4.1, there are two equivalent ways of constructing these matrices, depending on where you put the derivatives of the contemporaneous mixed variables.

The first possibility is:
D=\left(\begin{array}{cc} 0 & -(1-\psi)\beta \\ 1 & 0 \end{array}\right) ,\,\, E=\left(\begin{array}{cc} \psi & -1 \\ 0 & 1 \end{array}\right)

The second possibility is:
D=\left(\begin{array}{cc} 1 & -(1-\psi)\beta \\ 1 & 0 \end{array}\right) ,\,\, E=\left(\begin{array}{cc} \psi & 0 \\ 0 & 1 \end{array}\right)

If you consider the dynamic system:
D\left(\begin{array}{c}\pi_t \\ \pi_{t+1}\end{array}\right) = E\left(\begin{array}{c}\pi_{t-1} \\ \pi_t\end{array}\right)
you can see that the first line of the system corresponds to (the deterministic version of) the NKPC, while the second line is simply the trivial identity \pi_t=\pi_t.
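This deterministic system can also be illustrated numerically (a sketch; parameter values are assumed, and the second possibility is used since its D is invertible, with det D = (1-\psi)\beta). Writing z_{t}=(\pi_{t-1},\pi_{t})^{\top}, the system is D z_{t+1}=E z_{t}, and the eigenvalues of D^{-1}E deliver both the Blanchard–Kahn count and the stable coefficient of the law of motion:

```python
import numpy as np

# Illustrative parameter values (assumed for this sketch).
psi, beta = 0.5, 0.99
a = (1 - psi) * beta

# Second possibility above.
D = np.array([[1.0, -a],
              [1.0, 0.0]])
E = np.array([[psi, 0.0],
              [0.0, 1.0]])

# With z_t = (pi_{t-1}, pi_t)', the system D z_{t+1} = E z_t becomes
# z_{t+1} = T z_t with T = inv(D) @ E.
T = np.linalg.solve(D, E)
eigvals = np.linalg.eigvals(T)

# Blanchard-Kahn: one forward-looking variable, so a unique stable
# solution requires exactly one eigenvalue outside the unit circle.
n_unstable = int(np.sum(np.abs(eigvals) > 1))
g = min(eigvals, key=abs).real  # the stable eigenvalue
```

With these values the eigenvalues are 10/11 ≈ 0.909 and 10/9 ≈ 1.111: the stable one is the coefficient of the deterministic law of motion \pi_{t}=g\pi_{t-1}, and the unstable one is discarded by the Blanchard–Kahn selection.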

Thank you: it’s getting clearer.

D and E in the second possibility (the first being substantially equal) are identical to what I wrote in my first message today (and in my opening post and response to @jpfeifer), from which I gather that

\mathbb{E}_{t}\pi_{t-1}=\pi_{t-1}

indeed, whereby “there is no expectancy term involved in the definition of y_t^+.” Correct? I nevertheless still fail to see your own reasoning for it (@ecastro’s answer aside).

Perhaps the answer is that you " reinterpret it in terms of a deterministic dynamic system" (@sebastien) after all, whereby “there is no more uncertainty” (@ecastro) indeed.

Thank you all.

I am not sure where the problem is. The realization of \pi at time t-1 is known at time t, i.e. it is contained in the information set of the expectations operator.

I believe I meant to write

\mathbb{E}_{t-1}\pi_{t}=\pi_{t}

in my last post, not

\mathbb{E}_{t}\pi_{t-1}=\pi_{t-1}.

@jpfeifer It is contained in the information set of the expectations operator at which time? Your answer seems to be that of @ecastro. While I see the logic to it, from a time t perspective, as opposed to an extra-temporal one (invoking @sebastien’s deterministic approach), I fail to see why Christopher Sims might have then made use of expectational errors or revisions, e.g.

\pi_{t}-\mathbb{E}_{t-1}\pi_{t}=\eta_{1t}.
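For what it is worth, the two views can be reconciled in the hNKPC example (a sketch, not an official Dynare statement). In Sims’ form the system would read

\left[\begin{array}{cc}1 & -\left(1-\psi\right)\beta \\ 1 & 0\end{array}\right]\left[\begin{array}{c}\pi_{t} \\ \mathbb{E}_{t}\pi_{t+1}\end{array}\right]=\left[\begin{array}{cc}\psi & 0 \\ 0 & 1\end{array}\right]\left[\begin{array}{c}\pi_{t-1} \\ \mathbb{E}_{t-1}\pi_{t}\end{array}\right]+\left[\begin{array}{c}\kappa \\ 0\end{array}\right]\varepsilon_{t}+\left[\begin{array}{c}0 \\ 1\end{array}\right]\eta_{t},

whose second row is \pi_{t}=\mathbb{E}_{t-1}\pi_{t}+\eta_{t}. Under the deterministic reinterpretation used to solve for the policy function, \eta_{t}\equiv 0 and that row collapses to the trivial identity \pi_{t}=\pi_{t}; Sims keeps \eta_{t} explicit because gensys determines existence and uniqueness precisely from the conditions under which \eta_{t} is pinned down.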