Potential issues with large shocks?

Dear all,

I recently noticed that in my Dynare model, the simulation performs poorly when the shock is large.
This is intuitive, since Dynare approximates the model around the steady state (or the stochastic steady state when the shock has time-varying volatility, which is my case).

I wonder whether there is a general guideline on the proper size of a shock. Is std(x) = 1.0 feasible? Are there any examples with large shocks?

In my model there is something like exp(exp(x)), a log-normal productivity shock with time-varying volatility. So there is a lot of curvature, which I guess also matters for large shocks?
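To make this concrete, here is a stripped-down sketch of the kind of specification I mean (names, values, and timing are placeholders, not my actual model):

```
var A a x;            // productivity level, log productivity, log volatility
varexo e_a e_x;       // level innovation, volatility innovation
parameters rho_a rho_x c;

rho_a = 0.95;
rho_x = 0.90;
c     = 3;

model;
// log volatility follows an AR(1)
x = rho_x*x(-1) + e_x;
// log productivity: innovation scaled by the time-varying volatility exp(x - c)
a = rho_a*a(-1) + exp(x - c)*e_a;
// productivity in levels: this is where the double exponential exp(exp(.)) shows up
A = exp(a);
end;

initval;
x = 0;
a = 0;
A = 1;
end;

shocks;
var e_a; stderr 1;    // the "large" unit shock my question is about
var e_x; stderr 1;
end;

stoch_simul(order=3, pruning, irf=0);
```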

Thanks!

If your model is in logs, then a standard deviation of 1 means 100 percent, which is extreme.
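To put a number on it: a realization of size 1 in logs scales the level by exp(1) ≈ 2.72, i.e. an increase of about 172 percent, while a realization of -1 scales it by exp(-1) ≈ 0.37.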

In the model there is y = exp(exp(x-c)), where x is the shock and c is a positive constant. In this case (especially when c is large), one standard deviation of x does not correspond to 100%, but does it still involve large numerical errors?
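To be concrete: with y = exp(exp(x-c)) we have log(y) = exp(x-c), so dlog(y)/dx = exp(x-c). Around x = 0, a one-unit shock to x therefore moves log(y) by only about exp(-c), which is tiny when c is large.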

I think the general question is whether there is an approximation issue with large shocks, conditional on the effective size of the shock being small (as in exp(exp(x))).

You cannot generally judge the accuracy of a solution without actually computing Euler errors or some other accuracy measure; one can only speculate. However, given that you are doing a polynomial approximation to a double-exponential function, I would indeed guess that the simulation performs poorly. That formulation strikes me as a crazy one to approximate.
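For reference, a standard unit-free measure is the Euler equation error. For an illustrative CRRA consumption Euler equation c^(-gamma) = beta*E_t[c(+1)^(-gamma)*R(+1)] (your model's equations may of course differ), the error along a simulated path is

EE_t = 1 - (beta*E_t[c(+1)^(-gamma)*R(+1)])^(-1/gamma) / c,

which is in consumption units: a value of 1e-3 means a one-dollar mistake for every thousand dollars spent.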

Thank you for your reply. But isn't that a standard setup for stochastic volatility? Consider x as the shock to the log-volatility of log(A); then exp(x) is the volatility of log(A) (e.g., of a productivity shock). Exponentiating twice then yields the level of productivity.

That’s not a good idea. See point 1 at:

Thank you for the reference. In that Econometrica paper, the volatility is AR(1), not the log-volatility, so there is a possibility that the volatility could go negative.

But in another related exercise, Caldara et al. (2012, RED), they model the log-volatility as an AR(1). So, in general, would you prefer the Basu-Bundick specification over Caldara et al. for modeling shocks to volatility, to avoid the double exponential?

The log-log specification has very undesirable theoretical properties, as outlined in the Andreasen paper. I would go for the level specification (a minimal sketch follows the list below). With respect to volatility going negative:

  1. You are doing a polynomial approximation. There is no way to introduce a bound regardless of the actual specification.
  2. Conceptually, even if the standard deviation becomes negative, the variance is still positive.
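Here is a minimal sketch of the level specification (parameter names and values are placeholders, not a calibration):

```
var a sigma_a;        // log productivity, volatility in levels
varexo e_a e_sigma;   // level innovation, volatility innovation
parameters rho_a rho_sigma sigma_bar eta;

rho_a     = 0.95;
rho_sigma = 0.90;
sigma_bar = 0.01;
eta       = 0.005;

model;
// volatility in levels follows an AR(1); the approximation may push it
// negative, but it only enters multiplied by e_a, so the variance stays positive
sigma_a = (1 - rho_sigma)*sigma_bar + rho_sigma*sigma_a(-1) + eta*e_sigma;
a = rho_a*a(-1) + sigma_a(-1)*e_a;
end;

initval;
a = 0;
sigma_a = sigma_bar;
end;

shocks;
var e_a; stderr 1;
var e_sigma; stderr 1;
end;

stoch_simul(order=3, pruning, irf=0);
```

Note that the innovations are standardized to unit variance and all the scaling happens through sigma_bar and eta inside the model.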

Thanks! I understand that a negative volatility would not be an issue for computation, given that it is multiplied by another shock.

With respect to 1, I saw that in the Andreasen Economics Letters paper there are three suggested ways to fix it, but some involve using non-negative shocks.

In general, is there a way to specify non-negative shocks in Dynare, apart from assuming log-normality? Thanks!

Again, polynomial approximations do not allow for boundedness, regardless of the initial specification. Take exp(x) around 0. Its approximation at first order is 1+x, which is obviously unbounded.
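To see this concretely: at order k, the perturbation replaces exp(x) by its Taylor polynomial 1 + x + x^2/2 + ... + x^k/k!. Any such truncation is a non-constant polynomial and hence unbounded; the first-order version 1 + x already turns negative for x < -1, even though exp(x) never does.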