Intuition behind 'pruning'

Dear all,

Could somebody please explain what ‘pruning’ does?

For example, I was not able to simulate a second order approximation of a model, but under ‘pruning’ it works. The manual says: “Discard higher order terms when iteratively computing simulations of the solution.”

Thank you very much!

To get an intuition of the problem, consider the case of a quadratic first order autoregressive process (the reduced form solution of a DSGE approximated at second order has more terms):

y_t = \rho y_{t-1}^2

Starting from a given initial condition y_0 we have:

y_1 = \rho y_0^2,
y_2 = \rho^3 y_0^4,\ldots
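This recursion can be checked numerically. Here is a minimal sketch; the value \rho = 0.9 and the two initial conditions are illustrative assumptions, not taken from any particular model:

```python
# Iterate y_t = rho * y_{t-1}^2 from a given initial condition.
# rho = 0.9 and the initial conditions below are illustrative assumptions.
rho = 0.9

def simulate(y0, periods=10):
    path = [y0]
    for _ in range(periods):
        path.append(rho * path[-1] * path[-1])
    return path

# With |rho * y0| < 1 the path shrinks towards zero;
# with |rho * y0| > 1 it explodes, even though |rho| < 1.
stable = simulate(0.5)
explosive = simulate(1.5)
```

Writing x_t = \rho y_t gives x_t = x_{t-1}^2 = x_0^{2^t}, which is why the path's behaviour flips entirely on the initial condition.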

So even if the model looks stationary because \rho is smaller than one in absolute value, the generated path may diverge depending on the initial condition (it will diverge if |\rho y_0|>1). The dependence of the path on history is a common issue when dealing with nonlinear dynamic models. I do not have time to elaborate on this, but things get worse here because of the perturbation approximation of the original model.

The pruning approach is an attempt to remove the explosiveness related to the powers of the initial condition, by discarding terms of order higher than two (in the case of a second order approximation). The idea was introduced in a paper by Kim, Kim, Schaumburg and Sims. Suppose that the true model is (again, the reduced form solution of a DSGE approximated at second order has more terms, with innovations, squared innovations and cross products, but this is enough to sketch the principle):

y_t = \rho y_{t-1} + \gamma y_{t-1}^2

pruning then consists of augmenting the dynamics with an additional state variable, defined by the linear part of the true dynamics:

z_t = \rho z_{t-1}

with z_0=y_0 (same initial condition). The model is then transformed into:

\begin{cases} y_t &= \rho y_{t-1} + \gamma z_{t-1}^2\\ z_t &= \rho z_{t-1} \end{cases}

Obviously the path generated by this system is different from the path generated by the initial model. But the system is stationary (i.e. it will not diverge) as long as the equation for the new variable z_t is stationary (i.e. |\rho|<1). The stability of the path no longer depends on the initial condition. If you google pruning and DSGE you will find some papers attempting to find a rationale for this approach.
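The contrast between the two systems can be sketched numerically; the parameter values \rho = 0.9, \gamma = 0.5 and the initial condition are illustrative assumptions:

```python
# Compare the original quadratic model y_t = rho*y_{t-1} + gamma*y_{t-1}^2
# with its pruned version, where the squared term is fed by the linear
# auxiliary state z_t = rho*z_{t-1}. Parameter values are illustrative.
rho, gamma = 0.9, 0.5

def simulate_true(y0, periods):
    y = y0
    for _ in range(periods):
        y = rho * y + gamma * y * y
    return y

def simulate_pruned(y0, periods):
    y = z = y0  # same initial condition for both states
    for _ in range(periods):
        y, z = rho * y + gamma * z * z, rho * z
    return y

# Starting above the unstable fixed point (1 - rho)/gamma = 0.2,
# the true model explodes while the pruned one decays towards zero.
diverging = simulate_true(1.0, 20)
pruned = simulate_pruned(1.0, 200)
```

Since z_t decays geometrically, the pruned y_t inherits stability from the linear part regardless of where the simulation starts.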

In Dynare, the pruning approach is implemented for second and third order approximations. An alternative is simply to reduce the variance of the innovations (so that the paths never leave the stable region). Other alternatives, which I tend to prefer but which are not yet implemented in Dynare, exist. For instance, see den Haan et al.



Dear Stéphane

Thank you very much for your time and this fantastic explanation! I really appreciate your help.

Clearly, your post answers my question.