Difficulty in Solving a Model with Large Shocks: Seeking Advice

Dear all,

I am having difficulties in solving a model that includes large shocks. Specifically, when I increase the standard deviation of shock e (denoted by std_e), the model fails to find a steady state and returns an error message:

“Impossible to find the steady state (the sum of square residuals of the static equations is 63.5725). Either the model doesn’t have a steady state, there are an infinity of steady states, or the guess values are too far from the solution.”

I have found that I can solve the model if std_e is small (e.g., std_e=0.5), but when I increase it gradually, the model explodes at a certain threshold (e.g., std_e=1.0).

I understand that Dynare performs a perturbation around the steady state, so increasing std_e should not affect the steady state itself, and I have verified this numerically along the path. Therefore, changing the initial values should not help in this case.

I am seeking advice on other possible ways to overcome this issue and obtain a correct solution for the model. For context, my ultimate goal is to model a log-normal shock x=exp(e) with a mean close to zero, but this requires a shock e with a very negative mean and a relatively large standard deviation, which seems to be causing the problem.
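For reference, a log-normal variable x = exp(e) with e ~ N(mu, sigma^2) has mean exp(mu + sigma^2/2), so pushing E[x] toward zero while keeping sigma large does force mu to be very negative. A quick sanity check of that calibration logic (an illustrative sketch with made-up target values, not part of any model):

```python
import numpy as np

# Illustrative targets (not from the actual model): E[x] = 0.05 with sigma = 1.0
sigma = 1.0
target_mean = 0.05

# For x = exp(e), e ~ N(mu, sigma^2): E[x] = exp(mu + sigma^2 / 2).
# Solving for mu shows how negative the mean of e must be:
mu = np.log(target_mean) - sigma**2 / 2

rng = np.random.default_rng(0)
e = rng.normal(mu, sigma, size=1_000_000)
x = np.exp(e)
print(mu)        # very negative: about -3.4957
print(x.mean())  # close to the 0.05 target
```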


Some related discussion here


I don’t understand the problem. Why does your shock standard deviation affect the steady state at all?

Thank you for the reply.

Yes, you are right: the standard deviation should not affect the steady state. The problem was that std_e entered my steady-state block (I was effectively computing the stochastic steady state there). After removing it, the model solves even for large standard deviations.

I would like to seek your advice on a related question:

  1. Is it always the case that models with large shock standard deviations have poor numerical performance? I am wondering whether a log-normal shock x = exp(e) with a large std_e is a reasonable way to generate skewness in x. Thanks!

It’s about the combination of shock size and solution technique. Local approximations tend to perform poorly if large shocks move the simulation far off from the steady state.
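A simple illustration of that point (my own toy example, not Dynare output): a first-order Taylor expansion of exp(e) around e = 0 is accurate for small shocks but deteriorates rapidly as shocks push the state away from the expansion point, which is exactly what happens to local decision rules under large shocks:

```python
import numpy as np

# First-order (linear) approximation of f(e) = exp(e) around e = 0 is 1 + e.
# The error grows rapidly with the distance from the expansion point, which is
# why local perturbation solutions degrade when large shocks move the
# simulation far from the steady state.
for e in [0.1, 0.5, 1.0, 2.0]:
    exact = np.exp(e)
    linear = 1.0 + e
    print(f"e={e}: exact={exact:.3f}, linear={linear:.3f}, error={exact - linear:.3f}")
```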

Regarding skewness: yes, that should work to generate skewness in the exogenous process. However, empirical skewness may be hard to generate.
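For the record, the skewness of a log-normal x = exp(e) depends only on the standard deviation of e, with closed form (exp(sigma^2) + 2) * sqrt(exp(sigma^2) - 1). A quick check by simulation (illustrative values, not from the model above):

```python
import numpy as np

sigma = 1.0  # illustrative std of the normal shock e
# Closed-form skewness of a log-normal variable:
theoretical = (np.exp(sigma**2) + 2) * np.sqrt(np.exp(sigma**2) - 1)

rng = np.random.default_rng(1)
x = np.exp(rng.normal(0.0, sigma, size=2_000_000))
z = (x - x.mean()) / x.std()
empirical = (z**3).mean()  # sample skewness

print(theoretical)  # about 6.18
print(empirical)    # in the same range, though sample skewness of a
                    # heavy-tailed variable converges slowly
```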

Thank you. The first point seems to be a general issue: whenever the simulation moves away from the steady state, the numerical solution deteriorates because the decision rules are no longer reliable there. Is that also the reason why many papers use short simulations?

Would you please elaborate a bit on what you mean by empirical skewness being hard to generate?

  1. That’s the reason why people opt for different solution techniques in this case.
  2. There are papers that find that, to generate realistic skewness and kurtosis as observed in the data, you may need some Markov switching, not just a simple log-normal process. But I would always try the latter first.
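As a sketch of the Markov-switching idea (my own toy example with hypothetical parameters): letting the shock volatility switch between a calm and a turbulent regime produces the fat tails (excess kurtosis) that a single Gaussian shock cannot:

```python
import numpy as np

# Two-regime Markov-switching volatility (hypothetical parameters for illustration):
# regime 0 is calm (low sigma), regime 1 is turbulent (high sigma).
sigmas = np.array([0.5, 3.0])
P = np.array([[0.95, 0.05],   # row s gives transition probabilities from regime s
              [0.10, 0.90]])

rng = np.random.default_rng(2)
T = 100_000
s = 0
e = np.empty(T)
for t in range(T):
    s = rng.choice(2, p=P[s])          # evolve the regime
    e[t] = rng.normal(0.0, sigmas[s])  # draw the shock with regime-specific volatility

z = (e - e.mean()) / e.std()
excess_kurtosis = (z**4).mean() - 3.0
print(excess_kurtosis)  # clearly positive: mixing volatilities fattens the tails
```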