Treatment of sign() in Dynare

Hi There,

The Dynare manual suggests that the signum function is supported internally for both MODEL_EXPRESSION and EXPRESSION.

I tried a simple example in which a model variable was set equal to sign() of a normal shock. Since the shock is zero in steady state, the variable has a steady-state value of zero, but it should equal -1 or 1 in simulation (or, more precisely, it should equal -1 or 1 if the exact function were used, though perhaps not under some approximation to it). Inspecting the simulated path at 1st and 2nd orders of approximation (the model crashes at third, presumably because the Dynare++ binaries do not support sign()) revealed that the variable was constantly equal to 0, which made me wonder how sign() is processed by Dynare. Is the constancy due to some assumption about the non-existent derivative of sign() at zero? Is a generalised notion of derivative used, perhaps? Or should signum simply not be used in stochastic models?
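
For concreteness, here is a minimal sketch of the kind of .mod file I have in mind (the variable and shock names are just placeholders):

```
var y;
varexo e;

model;
y = sign(e);   // y should be -1 or 1 whenever e != 0
end;

initval;
y = 0;
e = 0;
end;

shocks;
var e; stderr 1;
end;

stoch_simul(order=2, periods=200, irf=0);
```

With a setup like this, the simulated path of y comes out identically 0 at both 1st and 2nd order.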

If the former, then I guess, more broadly, the question would be how Dynare deals with the derivatives of non-differentiable (in this case even discontinuous) functions (max and min would be further examples)?

Any info on top of what is in the manual would be much appreciated!

Thanks in advance and apologies if this has already been covered somewhere!
pawel

We are dealing with it:
github.com/DynareTeam/dynare/issues/355
The derivative of sign(x) w.r.t. x is 0 everywhere.
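
Concretely, for an equation like y = sign(e), the perturbation approximation around the steady state e = 0 therefore collapses to a constant:

$$
y_t \approx \operatorname{sign}(0) + \operatorname{sign}'(0)\,e_t + \tfrac{1}{2}\operatorname{sign}''(0)\,e_t^2 + \dots = 0,
$$

since every derivative of sign() is set to zero. That is why the simulated path stays at 0 at any order of approximation.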

Thanks for this, much appreciated.
pawel

ps. At the risk of sounding pedantic, sign(x) is not differentiable at 0, at least if we use the standard definition of the derivative (en.wikipedia.org/wiki/Derivative). Which is why I was referring to its generalized derivative at 0 (equal to twice Dirac's delta).
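
(Writing sign() in terms of the Heaviside step function H makes this explicit:

$$
\operatorname{sign}(x) = 2H(x) - 1 \quad\Longrightarrow\quad \frac{d}{dx}\operatorname{sign}(x) = 2\,\delta(x)
$$

in the distributional sense.)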

Yes, you are right. But in practice you should not use sign at the point 0 in stochastic models; at other points it is fine. Moreover, you are mathematically correct that NaN would be the right output at 0. However, that might lead to very undesirable numerical behavior, so using something that works on the computer seems preferable to something that is correct in theory but generates practical issues.

In the future, there will be an explicit warning.

Not sure you need my two cents' worth of advice, but if this thing can sometimes return results that are incorrect (at the kink), and otherwise returns something that is most likely useless (approximating sign() around any non-zero point is just a fancy way of writing plus or minus one, right?), then perhaps disabling it is the way forward (to be clear, I'm only talking about stochastic simulations)?
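
(What I mean is that, for any expansion point a ≠ 0, every term beyond the constant vanishes:

$$
\operatorname{sign}(x) \approx \operatorname{sign}(a) + \operatorname{sign}'(a)(x - a) + \dots = \operatorname{sign}(a) \in \{-1, +1\},
$$

because sign() is locally constant around any non-zero point.)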

For one thing, it would eliminate posts like this from the forum…

Thanks again for your help, and I’ll heed the warning and “won’t try it at home” any further :wink:,
p

We always appreciate feedback.

Unfortunately, the preprocessor that takes the derivatives does not know whether you will use them for a Taylor approximation or for a Newton-method-type deterministic simulation. In the former case, taking the derivative is either useless or evil.

For Newton-method applications, you might be looking for a variable y that satisfies a condition involving sign(y), but you don't know whether y at the solution is positive or negative (you are not looking at the steady state). When the path goes from plus to minus, the Newton algorithm might run into trouble if there is an asymptote at 0.
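
Purely as an illustration (the model and all numbers are made up), think of a backward-looking toy model whose solution path switches from the positive to the negative regime:

```
var y;
varexo e;

model;
// sign(y(-1)) shifts the law of motion; there are steady states at y = 1 and y = -1
y = 0.9*y(-1) + 0.1*sign(y(-1)) + e;
end;

initval;
y = 1;    // positive steady state with e = 0
e = 0;
end;

endval;
y = -1;   // negative steady state with e = 0
e = 0;
end;

shocks;
var e;
periods 1;
values -2.5;
end;

simul(periods=100);
```

Here the stacked Newton solver needs some derivative of sign(y(-1)) to build its Jacobian, and setting it to 0 at least lets it work in simple cases like this one.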

That's why simply disabling it is not a good option; we can only issue a warning when someone uses the derivative in stochastic simulations.