Static variables in Dynare

I know that Dynare solves the DSGE model in state-space form. In Blanchard and Kahn (1980) we divide the endogenous variables into two parts: predetermined (state) variables and non-predetermined variables. But in the policy and transition functions, Dynare reports equations for all endogenous variables. My question is: how does Dynare include static variables in the policy and transition functions?
In the original Blanchard and Kahn (1980) method we divide the variables into predetermined and non-predetermined variables and do not include the static variables, yet Dynare's output for the policy and transition functions shows equations for all endogenous variables, including static ones.

In Blanchard-Kahn, you need to substitute out all static variables to apply the approach. Dynare essentially uses the Schur approach of Klein (2000).

Thank you so much, Professor.
You are really very strong in DSGE models.
So Dynare employs Klein's (2000) method and the Schur decomposition.

If you want to know the details, please read
Solving rational expectations models at first order: what Dynare does, by Sébastien Villemot

Thank you so much, professor. I think DSGE models are widespread, and it takes many years to learn the details.

This excerpt from my lecture notes might help you as a companion to Villemot's guide:

At period t, the model is given by:
\mathbb{E}_t \{ f\left(y_{t-1}^{*},y_t,y_{t+1}^{**},u_t\right) \}= 0
There are four types of endogenous variables:

  • static variables: those that appear only at the current period, but not at the past or future period. Their number is n_{static}\leq n.
  • purely backward (or predetermined) variables: those that appear at the past period, possibly at the current period, but not at the future period. Their number is n_{pred}\leq n.
  • purely forward variables: those that appear at the future period, possibly at the current period, but not at the past period. Their number is n_{fwrd}\leq n.
  • mixed variables: those that appear at both the past and future periods, and possibly at the current period. Their number is n_{both}\leq n.

Note that each variable falls into exactly one category, and we thus have the identity n=n_{static} + n_{pred} + n_{both} + n_{fwrd}. The state variables of the model, y_t^*, are the predetermined and mixed variables; their number is n_{spred} = n_{pred} + n_{both}. We also define y_t^{**} as the mixed and forward variables; their number is n_{sfwrd} = n_{both} + n_{fwrd}. We also use Dynare's specific ordering for variables (the so-called DR-ordering), that is:
y_t = \begin{pmatrix} static\\ predetermined\\ mixed \\ forward \end{pmatrix}, y_t^* = \begin{pmatrix} predetermined\\ mixed\end{pmatrix}, and y_t^{**} = \begin{pmatrix} mixed \\ forward \end{pmatrix}
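To make the typology concrete, here is a small Python sketch (the variable names and lag/lead flags are made up for illustration; this is not Dynare's code) that classifies variables into the four types and puts them into DR-order:

```python
# Sketch: classify endogenous variables from flags saying whether each
# variable appears with a lag and/or a lead in the model equations.
variables = {
    #  name: (appears_lagged, appears_led)  -- hypothetical example
    "c":   (False, True),   # forward
    "k":   (True,  False),  # predetermined
    "a":   (True,  True),   # mixed
    "y":   (False, False),  # static
}

def var_type(lagged, led):
    if not lagged and not led:
        return "static"
    if lagged and not led:
        return "predetermined"
    if lagged and led:
        return "mixed"
    return "forward"

types = {name: var_type(*flags) for name, flags in variables.items()}

# DR-ordering: static, then predetermined, then mixed, then forward
order = ["static", "predetermined", "mixed", "forward"]
dr_ordered = sorted(variables, key=lambda v: order.index(types[v]))
print(dr_ordered)   # ['y', 'k', 'a', 'c']

# n = n_static + n_pred + n_both + n_fwrd
counts = {t: sum(1 for v in types.values() if v == t) for t in order}
assert sum(counts.values()) == len(variables)
```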

Then a dynamic equilibrium solution for this model class is equivalent to finding a function g, called the policy function or transition equation, for all endogenous variables:
y_t = g(y_{t-1}^*,u_t,\sigma)

Here it is useful to distinguish g for the different types of variables, i.e.
y_t^* = g^{*}(y_{t-1}^*,u_t,\sigma)
y_t^{**} = g^{**}(y_{t-1}^*,u_t,\sigma)
y_{t+1}^{**} = g^{**}(y_{t}^*,u_{t+1},\sigma) = g^{**}(g^{*}(y_{t-1}^*,u_t,\sigma),u_{t+1},\sigma).
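To see how these functions fit together, here is a toy Python sketch with hypothetical linear policy functions (all coefficients are made up, and the \sigma argument is suppressed since it only enters as a constant) that evaluates the composition in the last line above:

```python
import numpy as np

# Toy, hypothetical linear policy functions (coefficients made up):
g_star_x,  g_star_u  = np.array([[0.9]]), np.array([[1.0]])   # for y*_t
g_sstar_x, g_sstar_u = np.array([[0.5]]), np.array([[0.2]])   # for y**_t

def g_star(y_lag, u):            # y*_t  = g*(y*_{t-1}, u_t)
    return g_star_x @ y_lag + g_star_u @ u

def g_sstar(y_lag, u):           # y**_t = g**(y*_{t-1}, u_t)
    return g_sstar_x @ y_lag + g_sstar_u @ u

y_lag = np.array([1.0])          # y*_{t-1}
u_t   = np.array([0.1])          # shock at t
u_tp1 = np.array([0.0])          # future shock (zero in expectation)

# y**_{t+1} = g**( g*(y*_{t-1}, u_t), u_{t+1} )
y_sstar_next = g_sstar(g_star(y_lag, u_t), u_tp1)
print(y_sstar_next)              # [0.5]
```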

The original model can then be written as a function of predetermined variables, current and future shocks, and the perturbation parameter:
F(y_{t-1}^*,u_t,\sigma,u_{t+1}) := f(y_{t-1}^*, g(y_{t-1}^*,u_t,\sigma), g^{**}(g^{*}(y_{t-1}^*,u_t,\sigma), u_{t+1}, \sigma), u_{t})
and you apply the perturbation technique on these functions.

First-order approximation

The first-order Taylor expansion of the i-th equation of F around \bar{r}=(\bar{x},0,0,0) is written in tensor notation as
\left[F(r)\right]^{i} \approx \left[F(\bar{r})\right]^i + \left[F_{x}\right]^i_{\alpha_1} \left[\hat{x}\right]^{\alpha_1} + \left[F_{u}\right]^i_{\beta_1} \left[u\right]^{\beta_1} + \left[F_\sigma\right]^i \sigma + \left[F_{u_{+}}\right]^i_{\delta_1} \left[u_{+}\right]^{\delta_1}
Taking the conditional expectation and setting it to zero, yields
\left[F(\bar{r})\right]^i + \left[F_{x}\right]^i_{\alpha_1} \left[\hat{x}\right]^{\alpha_1} + \left[F_{u}\right]^i_{\beta_1} \left[u\right]^{\beta_1} + \left[F_\sigma\right]^i \sigma + \left[F_{u_{+}}\right]^i_{\delta_1} \left[\Sigma^{(1)}\right]^{\delta_1}\sigma = 0
where \left[F(\bar{r})\right]^i =0 and \left[\Sigma^{(1)}\right]^{\delta_1} is the \delta_1 entry of \Sigma^{(1)}=E_t\{u_{t+1}\}=0. Note that this equation needs to hold for any value of \hat{x}, u and \sigma. Therefore, the first-order partial derivatives of g with respect to x, u and \sigma are recovered from the necessary and sufficient conditions
\left[F_{x}\right]^i_{\alpha_1} = 0, \qquad \left[F_{u}\right]^i_{\beta_1} = 0, \qquad \left[F_{\sigma}\right]^i = 0

The computation is done in sequence, starting with g_x, then g_u, and lastly g_\sigma. Note that the first-order approximation entails solving a quadratic matrix equation rather than a linear one (as at higher orders), for which several algorithms exist in the literature (under the keyword solving rational expectations models). Also, at first order only the first moment of future shocks enters the equations, and it is assumed to be zero.

Recovering g_{x}

The coefficients of g_{x} are retrieved from
{[z_{x}]}_{\alpha_1} = \begin{bmatrix} {[I]}_{\alpha_1}\\ [g_{x}]_{\alpha_1}\\ {[g^{**}_{x}]}_{\rho_1}{[g^{*}_{x}]}^{\rho_1}_{\alpha_1}\\ 0 \end{bmatrix}
{[F_{x}]}^i_{\alpha_1} = {[f_{z}]}^i_{\gamma_1} {[z_{x}]}^{\gamma_1}_{\alpha_1} = {[f_{y_{-}^*}]}^i_{\alpha_1} + {[f_{y_0}]}^i_{\rho^0_1} {[g_{x}]}^{\rho^0_1}_{\alpha_1} + {[f_{y_{+}^{**}}]}^i_{\rho^{+}_1} {[g^{**}_{x}]}^{\rho^{+}_1}_{\rho_1} {[g^{*}_{x}]}^{\rho_1}_{\alpha_1} = 0
Tensor unfolding yields the corresponding matrix representation:
z_{x} = \begin{pmatrix} I\\ g_{x}\\ g^{**}_{x} g^{*}_{x}\\ 0 \end{pmatrix}
F_{x} = f_{z} z_{x} = f_{y_{-}^*} + f_{y_0} g_{x} + f_{y_{+}^{**}} g^{**}_{x} g^{*}_{x} = A g_{x} + f_{y_{-}^*} = 0
A = f_{y_0} + \begin{pmatrix} \underbrace{0}_{n\times n_{static}} &\vdots& \underbrace{f_{y^{**}_{+}} \cdot g^{**}_{x}}_{n \times n_{spred}} &\vdots& \underbrace{0}_{n\times n_{fwrd}} \end{pmatrix} is the important perturbation matrix.
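Since A itself contains g^{**}_{x}, the condition F_x = 0 is quadratic in the unknown g_x. A minimal Python sketch, assuming a toy model in which all variables are mixed (so g^{*}_{x} = g^{**}_{x} = g_x and there are no statics) and using scipy's QZ routine, might look as follows; the coefficient matrices are made up, and this is not Dynare's actual implementation:

```python
import numpy as np
from scipy.linalg import ordqz

# Toy Jacobians of f_plus y_{t+1} + f_zero y_t + f_minus y_{t-1} = 0
f_plus  = np.array([[0.5, 0.0], [0.0, 0.2]])   # f_{y+}
f_zero  = -np.eye(2)                           # f_{y0}
f_minus = np.array([[0.3, 0.05], [0.0, 0.1]])  # f_{y-}
n = 2

# Companion pencil A x_{t+1} = B x_t with x_t = (y_{t-1}, y_t)
A = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), f_plus]])
B = np.block([[np.zeros((n, n)), np.eye(n)],
              [-f_minus, -f_zero]])

# Generalized Schur (QZ) of det(B - lambda*A) = 0, reordered so that
# eigenvalues inside the unit circle come first
S, T, alpha, beta, Q, Z = ordqz(B, A, sort='iuc')

# Blanchard-Kahn order condition: as many stable eigenvalues as states
assert np.sum(np.abs(alpha) < np.abs(beta)) == n

# Stable deflating subspace: y_t = Z21 Z11^{-1} y_{t-1}, i.e. g_x
Z11, Z21 = Z[:n, :n], Z[n:, :n]
g_x = np.real(Z21 @ np.linalg.inv(Z11))

# g_x solves the quadratic matrix equation and is stable
residual = f_plus @ g_x @ g_x + f_zero @ g_x + f_minus
print(np.max(np.abs(residual)))                    # ~0
print(np.max(np.abs(np.linalg.eigvals(g_x))) < 1)  # True
```

The stable solvent is picked out by keeping only the Schur vectors associated with the eigenvalues inside the unit circle, which is exactly the reordering step described below.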

Solving this system boils down to solving a linearized rational expectations model, for which several algorithms have been proposed. The main idea is to first transform the system so that static endogenous variables no longer appear, via a QR decomposition of the submatrix of f_{y_0} formed by the columns corresponding to static endogenous variables (since static variables appear only at the current period, derivatives with respect to them show up only in f_{y_0}). The core of the algorithm is then a generalized Schur decomposition (also known as the QZ decomposition) with a specific reordering such that the stable generalized eigenvalues come first. After imposing two conditions, namely a squareness condition (the so-called Blanchard and Kahn (1980) order condition) and a non-singularity condition (the so-called Blanchard and Kahn (1980) rank condition), we obtain a stable and unique solution for the coefficients of g_x that correspond to the predetermined, mixed and forward variables. Undoing the QR decomposition then yields an invertible linear system for the coefficients corresponding to the static variables. Experience shows that this algorithm is more generic and more efficient, especially for large models, than other algorithms proposed in the literature.
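The first step of the algorithm, rotating the system with a QR decomposition of the static columns, can be sketched as follows (hypothetical Jacobian, not Dynare's code):

```python
import numpy as np

# Hypothetical Jacobian f_y0 with the static variables in the first
# columns (DR-ordering); the numbers are random for illustration.
rng = np.random.default_rng(0)
n, n_static = 5, 2
f_y0 = rng.standard_normal((n, n))

# QR decomposition of the static columns, then premultiply the whole
# system of equations by Q'
Q, R = np.linalg.qr(f_y0[:, :n_static], mode='complete')
f_y0_rotated = Q.T @ f_y0

# The bottom n - n_static equations now have (numerical) zeros in the
# static columns, so the QZ step can be run on that smaller subsystem.
print(np.max(np.abs(f_y0_rotated[n_static:, :n_static])))  # ~0
```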

Recovering g_{u}

The coefficients of g_{u} are retrieved from
{[z_{u}]}_{\beta_1} = \begin{bmatrix} 0\\ [g_{u}]_{\beta_1}\\ {[g^{**}_{x}]}_{\rho_1}{[g^{*}_{u}]}^{\rho_1}_{\beta_1}\\ {[I]}_{\beta_1} \end{bmatrix}
{[F_{u}]}^i_{\beta_1} = {[f_{z}]}^i_{\gamma_1} {[z_{u}]}^{\gamma_1}_{\beta_1} = {[f_{y_0}]}^i_{\rho^0_1} {[g_{u}]}^{\rho^0_1}_{\beta_1} + {[f_{y_{+}^{**}}]}^i_{\rho^{+}_1} {[g^{**}_{x}]}^{\rho^{+}_1}_{\rho_1} {[g^{*}_{u}]}^{\rho_1}_{\beta_1} + {[f_{u}]}^i_{\beta_1}= 0
Tensor unfolding yields the corresponding matrix system:
z_{u} = \begin{pmatrix} 0\\ g_{u}\\ g^{**}_{x} \cdot g^{*}_{u}\\ I \end{pmatrix}
F_{u} = f_{z} z_{u} = f_{y_0} g_{u} + f_{y_{+}^{**}} g^{**}_{x} g^{*}_{u} + f_{u}= A g_u + f_u = 0
As we already computed g_x, and hence A, a linear solve yields g_u = -A^{-1} f_u.
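A minimal sketch of this step, with made-up matrices standing in for the A and f_u of an actual model:

```python
import numpy as np

# Hypothetical A (already known from the g_x step) and f_u
A   = np.array([[1.2, 0.3],
                [0.0, 0.9]])
f_u = np.array([[0.5],
                [1.0]])

# From A g_u + f_u = 0
g_u = -np.linalg.solve(A, f_u)
print(np.max(np.abs(A @ g_u + f_u)))  # ~0
```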

Recovering g_{\sigma}

The coefficients of g_{\sigma} are retrieved from
{[z_{\sigma}]} = \begin{bmatrix} 0\\ {[g_{\sigma}]}\\ {[g^{**}_{x}]}_{\rho_1} {[g^{*}_{\sigma}]}^{\rho_1} + {[g^{**}_{\sigma}]}\\ 0 \end{bmatrix}
{[F_{\sigma}]}^i = {[f_{z}]}^i_{\gamma_1} {[z_{\sigma}]}^{\gamma_1} = {[f_{y_0}]}^i_{\rho^0_1} {[g_{\sigma}]}^{\rho^0_1} + {[f_{y_{+}^{**}}]}^i_{\rho^{+}_1} {[g^{**}_{x}]}^{\rho^{+}_1}_{\rho_1} {[g^{*}_{\sigma}]}^{\rho_1} + {[f_{y_{+}^{**}}]}^i_{\rho^{+}_1} {[g^{**}_{\sigma}]}^{\rho^{+}_1} = 0
Tensor unfolding yields the corresponding matrix system:
z_{\sigma} = \begin{pmatrix} 0\\ g_{\sigma}\\ g^{**}_{x} g^{*}_{\sigma} + g^{**}_{\sigma} \\ 0 \end{pmatrix}
F_{\sigma} = f_{z} z_{\sigma} = f_{y_0} g_{\sigma} + f_{y_{+}^{**}} g^{**}_{x} g^{*}_{\sigma} + f_{y_{+}^{**}} g^{**}_{\sigma} = S g_{\sigma} = 0
S = A + \begin{pmatrix} \underbrace{0}_{n \times n_{static}}&\vdots & \underbrace{0}_{n \times n_{pred}} & \vdots & \underbrace{f_{y^{**}_{+}}}_{n \times n_{sfwrd}} \end{pmatrix}

As we already computed g_x, and hence S, and the system S g_\sigma = 0 is homogeneous, inverting S yields
g_\sigma = 0

This is a manifestation of the certainty equivalence property of the first order approximation. Even though agents take the effect of future shocks into account when optimizing, in a linearization to the first-order the policy rules and transition equations do not depend on the size of the structural shocks. In this sense future uncertainty does not matter.

I did not understand anything from your notation.

No problem :wink:
Just have a look at the Villemot paper then. He uses a + superscript where I used ** and a - superscript where I used *.