# Leitemo (2008), Economics Letters

Dear all,

I have read the paper “Inflation-targeting rules: History-dependent or forward-looking?” by Kai Leitemo (2008), Economics Letters.
I found that there are several mistakes in the paper, but it is still a very interesting one.
I did not manage to derive equation (9) of the paper. Since (9) is the most striking equation of the paper, it would be nice to know how to derive it!
I would appreciate any suggestions or hints.

Best regards,
Max

Equation (6) implies
{\mu _t} = - {\pi _t} + \theta {\mu _{t - 1}} + \left( {1 - \theta } \right){E_t}{\mu _{t + 1}}
while equation (7) implies
{x_t} = \frac{\gamma }{\lambda }{\mu _t}
Combining them yields
{x_t} = \frac{\gamma }{\lambda }\left( { - {\pi _t} + \theta {\mu _{t - 1}} + \left( {1 - \theta } \right){E_t}{\mu _{t + 1}}} \right)
Now notice that equation (7) implies
{\mu _t} = \frac{\lambda }{\gamma }{x_t} \: \forall \: t
Use this to replace \mu_{t-1} and \mu_{t+1}:
\begin{align} {x_t} &= \frac{\gamma }{\lambda }\left( { - {\pi _t} + \theta {\mu _{t - 1}} + \left( {1 - \theta } \right){E_t}{\mu _{t + 1}}} \right) \hfill \\ &= \frac{\gamma }{\lambda }\left( {\theta \frac{\lambda }{\gamma }{x_{t - 1}} + \left( {1 - \theta } \right){E_t}\frac{\lambda }{\gamma }{x_{t + 1}}} \right) - \frac{\gamma }{\lambda }{\pi _t} \hfill \\ &= \theta {x_{t - 1}} + \left( {1 - \theta } \right){E_t}{x_{t + 1}} - \frac{\gamma }{\lambda }{\pi _t} \hfill \\ \end{align}
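As a quick sanity check, the substitution can also be verified numerically; a minimal sketch with arbitrary illustrative parameter values:

```python
import random

random.seed(0)
theta, gamma, lam = 0.4, 0.5, 0.2          # illustrative parameters
pi_t, mu_lag, E_mu_lead = (random.uniform(-1, 1) for _ in range(3))

# Equation (6): mu_t = -pi_t + theta*mu_{t-1} + (1-theta)*E_t mu_{t+1}
mu_t = -pi_t + theta * mu_lag + (1 - theta) * E_mu_lead

# Equation (7): x_t = (gamma/lam)*mu_t, i.e. mu_t = (lam/gamma)*x_t in every period
x_t = (gamma / lam) * mu_t
x_lag = (gamma / lam) * mu_lag
E_x_lead = (gamma / lam) * E_mu_lead

# Combined equation (8): x_t = theta*x_{t-1} + (1-theta)*E_t x_{t+1} - (gamma/lam)*pi_t
rhs = theta * x_lag + (1 - theta) * E_x_lead - (gamma / lam) * pi_t
assert abs(x_t - rhs) < 1e-12
```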

Thank you Professor Pfeifer.

Unfortunately that is not my question. The derivation of equation (8) is very straightforward, as you did above (simple manipulation of the FOCs).
My question was how to come up with equation (9)!
How does one arrive at the closed-form solution of
x_t = \theta x_{t-1} + (1- \theta)E_t x_{t+1} - \frac{\gamma}{\lambda} \pi_t
namely
{x_t} = \rho {x_{t-1}} - \frac{\gamma}{\lambda (1-\eta) } \sum_{i=0}^\infty \delta_c^i E_t \pi_{t+i} .

I see that this problem is similar to deriving the closed form of a hybrid Phillips Curve. So I will try to look into Galí and Gertler (1999). Hopefully they provide guidance.

Best regards,
Max

Dear all,

I have found a solution. It seems there is a mistake in equation (9) of the Leitemo (2008) paper.
In my next post I will provide my solution.
Critique is welcome.

I corrected this post based on the hint below. Results now look consistent.
The derivations follow the steps provided by the appendix of:
“Notes on Estimating the Closed Form of the Hybrid New Phillips Curve”
Jordi Galí, Mark Gertler and J. David López-Salido
Preliminary draft, June 2001

To derive the closed-form solution of
x_t = \theta x_{t-1} + (1-\theta)E_t x_{t+1} - \frac{\gamma}{\lambda} \pi_t
define \varepsilon_{t+1} = x_{t+1} - E_t x_{t+1} , replace E_t x_{t+1} with x_{t+1} - \varepsilon_{t+1} , and lag the equation by one period:
x_{t-1} = \theta x_{t-2} + (1-\theta)(x_{t}-\varepsilon_t) - \frac{\gamma}{\lambda} \pi_{t-1}
Solve for x_t
x_t = \frac{1}{1-\theta} x_{t-1} - \frac{\theta}{1-\theta} x_{t-2} + \frac{\gamma}{\lambda (1-\theta)} \pi_{t-1} + \varepsilon_t
which is a second-order non-homogeneous difference equation in x.
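The rearrangement can be checked numerically; a small sketch with arbitrary state values (parameter values are only illustrative):

```python
import random

random.seed(1)
theta, gamma, lam = 0.4, 0.5, 0.2                     # illustrative parameters
x_lag2, x_lag1, pi_lag, eps = (random.uniform(-1, 1) for _ in range(4))

# x_t solved from the rearranged second-order difference equation
x_t = (x_lag1 / (1 - theta) - theta / (1 - theta) * x_lag2
       + gamma / (lam * (1 - theta)) * pi_lag + eps)

# It must satisfy the lagged equation:
# x_{t-1} = theta*x_{t-2} + (1-theta)*(x_t - eps_t) - (gamma/lam)*pi_{t-1}
lhs = theta * x_lag2 + (1 - theta) * (x_t - eps) - (gamma / lam) * pi_lag
assert abs(x_lag1 - lhs) < 1e-12
```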

Apply a lag-polynomial
(1- \frac{1}{1-\theta}L + \frac{\theta}{1-\theta}L^2)x_t = \frac{\gamma}{\lambda (1-\theta)} \pi_{t-1} + \varepsilon_t .
Note that 1- \frac{1}{1-\theta}L + \frac{\theta}{1-\theta}L^2 = (1-\delta_1 L)(1-\delta_2 L) holds where \delta_1,\delta_2 are the eigenvalues (see Gandolfo 2009, p.63-64).
Let |\delta_2|>1 and recall that (1-\delta_2 L) = -\delta_2 L (1-\frac{1}{\delta_2 L}) = -\delta_2 L (1-\frac{1}{\delta_2 }F) where F=L^{-1} is the forward operator.

Thus
\begin{align} (1-\delta_1 L)(1-\delta_2 L)x_t &= \frac{\gamma}{\lambda (1-\theta)} \pi_{t-1} + \varepsilon_t \\ -(1-\delta_1 L)\delta_2 L (1-\frac{1}{\delta_2 }F)x_t &= \frac{\gamma}{\lambda (1-\theta)} \pi_{t-1} + \varepsilon_t\\ (1-\delta_1 L)(1-\frac{1}{\delta_2 }F)x_t &= - \frac{\gamma}{\lambda (1-\theta)\delta_2} \pi_{t} - \frac{1}{\delta_2} \varepsilon_{t+1} \end{align} .

Recall that (1-\frac{1}{\delta_2 }F)^{-1} = \sum_{i=0}^\infty(\frac{1}{\delta_2})^i F^i.

Thus
\begin{align} (1-\delta_1 L)x_t &= - \frac{\gamma}{\lambda (1-\theta)\delta_2}\sum_{i=0}^\infty\left(\frac{1}{\delta_2}\right)^i \pi_{t+i} -\sum_{i=0}^\infty\left(\frac{1}{\delta_2}\right)^{i+1}\varepsilon_{t+1+i} \\x_t &= \delta_1 x_{t-1} - \frac{\gamma}{\lambda (1-\theta)\delta_2}\sum_{i=0}^\infty\left(\frac{1}{\delta_2}\right)^i \pi_{t+i} -\sum_{i=0}^\infty\left(\frac{1}{\delta_2}\right)^{i+1}\varepsilon_{t+1+i} \end{align}

Note that the eigenvalues are given by
\begin{align} \delta^2-\frac{1}{1-\theta}\delta + \frac{\theta}{1-\theta} &= 0 \\ \delta_{1,2} &= \frac{\frac{1}{1-\theta}\pm\sqrt{\left(\frac{1}{1-\theta}\right)^2 - 4 \frac{\theta}{1-\theta}}}{2} \\ &= \frac{1}{1-\theta} \frac{1\pm\sqrt{1 - 4 \theta(1-\theta)}}{2} \\ \delta_1 &= \frac{1}{1-\theta} \frac{1-\sqrt{1 - 4 \theta(1-\theta)}}{2} \\ &= \frac{\eta}{1-\theta} \\ \delta_2 &= \frac{1}{1-\theta} \frac{1+\sqrt{1 - 4 \theta(1-\theta)}}{2} \\ &= \frac{1-\eta}{1-\theta} = \frac{1}{\delta_c} \end{align}

where \eta = \frac{1-\sqrt{1 - 4 \theta(1-\theta)}}{2} and \delta_c = \frac{1-\theta}{1-\eta} as in Leitemo 2008, p. 268.
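One can confirm numerically that \delta_1 and \delta_2 as given above are indeed the roots of the characteristic polynomial (a sketch with one illustrative value of \theta):

```python
import math

theta = 0.7                                            # illustrative value in (0, 1)
eta = (1 - math.sqrt(1 - 4 * theta * (1 - theta))) / 2
d1 = eta / (1 - theta)
d2 = (1 - eta) / (1 - theta)

# Roots of delta^2 - delta/(1-theta) + theta/(1-theta) = 0 must satisfy:
assert abs(d1 + d2 - 1 / (1 - theta)) < 1e-9           # sum of roots
assert abs(d1 * d2 - theta / (1 - theta)) < 1e-9       # product of roots
```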

Applying the expectation operator E_t , noting that E_t\varepsilon_{t+1+i} = 0 \; \forall \, i \geq 0 , and plugging in the eigenvalues yields
\begin{align} x_t &= \delta_1 x_{t-1} - \frac{\gamma}{\lambda (1-\theta)\delta_2}\sum_{i=0}^\infty\left(\frac{1}{\delta_2}\right)^i E_t \pi_{t+i} \\ & = \frac{\eta}{1-\theta} x_{t-1} - \frac{\gamma(1-\theta)}{\lambda (1-\theta)(1-\eta)}\sum_{i=0}^\infty \delta_c^i E_t \pi_{t+i} \\ & = \frac{\eta}{1-\theta} x_{t-1} - \frac{\gamma}{\lambda (1-\eta)}\sum_{i=0}^\infty \delta_c^i E_t \pi_{t+i} \\ & = \rho x_{t-1} - \frac{\gamma}{\lambda (1-\eta)}\sum_{i=0}^\infty \delta_c^i E_t \pi_{t+i} \end{align}
which is the closed-form solution in Leitemo (2008), equation (9), since \frac{\eta}{1-\theta} = \frac{\theta}{1-\eta}=\rho according to the result below.

Using Octave it turns out that numerically \frac{\eta}{1-\theta} and \frac{\theta}{1-\eta} are equal for \theta \in (0,1).
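For reference, the check can be reproduced in a few lines (a Python sketch of the Octave computation; the grid and the tolerance are arbitrary):

```python
import math

for k in range(1, 100):                    # grid over theta in (0, 1)
    theta = k / 100.0
    eta = (1 - math.sqrt(1 - 4 * theta * (1 - theta))) / 2
    # Claim: eta/(1-theta) == theta/(1-eta) for all theta in (0, 1)
    assert abs(eta / (1 - theta) - theta / (1 - eta)) < 1e-9
```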

At first this seemed a bit strange to me.

It turns out that
\begin{align} (1-\eta)\eta &= \eta-\eta^2 \\ &= \frac{1-\sqrt{1-4\theta(1-\theta)}}{2}- \left(\frac{1-\sqrt{1-4\theta(1-\theta)}}{2}\right)^2 \\ &= \frac{2-2\sqrt{1-4\theta(1-\theta)}-1 +2\sqrt{1-4\theta(1-\theta)}-(1-4\theta(1-\theta))}{4} \\ &= \frac{1-(1-4\theta(1-\theta))}{4} = \frac{4\theta(1-\theta)}{4} \\ &= \theta(1-\theta) \end{align}

Thus \frac{\eta}{1-\theta}=\frac{\theta}{1-\eta}=\rho. It follows that my solution is equivalent to Leitemo’s solution, and there seems to be no mistake after all. (Note, however, that there are many mistakes in the appendix of Leitemo 2008!)

Critique or hints w.r.t. my questions above are very welcome.

Best regards from Kiel University!

You would need to correctly apply the Law of Iterated Expectations, but my hunch is that the result is correct.

I would say that my proof violates the LIE, but I cannot see how to fix this problem.

Have you tried replacing E_tx_{t+1} by x_{t+1}-\varepsilon_{t+1} , where \varepsilon_{t+1} is an expectational error (x_{t+1}-E_t x_{t+1}) that has conditional expectation 0?
Alternatively, if you only care about verifying your solution, you can simply plug in and verify.


Thanks a lot!
That was the final hint which I needed!!!
I corrected my derivations above using your advice.
Now the derivations look consistent to me.
Since E_t\varepsilon_{t+1+i} = E_t x_{t+1+i} - E_tE_{t+i}x_{t+1+i} = E_t x_{t+1+i} - E_tx_{t+1+i} = 0 there is no violation of the LIE left.
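Following the “plug in and verify” suggestion, the closed form can also be checked numerically. A sketch, assuming for illustration that inflation expectations satisfy E_t \pi_{t+i} = \phi^i \pi_t (so the infinite sum in (9) collapses to a geometric series); parameter values are arbitrary:

```python
import math

theta, gamma, lam, phi = 0.3, 0.5, 0.2, 0.8      # illustrative parameters
eta = (1 - math.sqrt(1 - 4 * theta * (1 - theta))) / 2
rho = eta / (1 - theta)
delta_c = (1 - theta) / (1 - eta)

# With E_t pi_{t+i} = phi^i * pi_t, the sum in (9) collapses (needs |delta_c*phi| < 1):
# sum_i delta_c^i E_t pi_{t+i} = pi_t / (1 - delta_c*phi)
c = gamma / (lam * (1 - eta) * (1 - delta_c * phi))

x_lag, pi_t = 1.0, 0.5                           # arbitrary current state
x_t = rho * x_lag - c * pi_t                     # closed form (9)
E_x_lead = rho * x_t - c * phi * pi_t            # closed form one period ahead

# The original equation x_t = theta*x_{t-1} + (1-theta)*E_t x_{t+1} - (gamma/lam)*pi_t must hold
rhs = theta * x_lag + (1 - theta) * E_x_lead - (gamma / lam) * pi_t
assert abs(x_t - rhs) < 1e-9
```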