I’m trying to interpret the meaning of the shocks when they are written in terms of standard errors. Suppose the equations for the endogenous variables are:

Y_GAP = output gap in % terms = 100*(log(real GDP) - log(potential GDP))

There is a separate equation that relates the output gap to its own lagged values and other endogenous variables, with a residual term. So it is

Y_GAP = linear combination of other variables + ERROR_Y

var ERROR_Y; stderr 0.1;

If I set the standard error on this error term to 0.1, would that mean we are imposing a shock of 10 percent, i.e. the output gap rises by 10 percent? And if I want the output gap to fall by 10 percent, how would I edit the above line of code to reverse the direction of the shock so that it’s reflected in the impulse response functions? Since standard errors measure the variability around the mean, we obviously can’t have a negative standard error to reflect a negative output gap.

Similarly, if there is another equation that defines:

inflation_rate = some_variables + ERROR_INF

var ERROR_INF; stderr 0.1;

Again, if I set the standard error on this error term to 0.1, would that mean we are letting the inflation rate rise by 10 percent? And if I want the inflation rate to fall by 10 percent, how would I edit the above line of code to reverse the direction of the shock?
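For intuition on the sign question: in a linearized model the impulse responses are linear in the shock, so the response to a negative shock is simply the mirror image of the response to a positive one, and no negative standard error is needed. A minimal sketch with an illustrative AR(1) (the persistence value 0.8 is a placeholder, not from any particular model):

```python
# In a linear model, the IRF to a -0.1 shock is the mirror image of the
# IRF to a +0.1 shock, so no "negative standard error" is needed.
# Illustrative AR(1): y_t = rho * y_{t-1} + e_t

def irf(shock, rho=0.8, horizon=8):
    path, y = [], 0.0
    for t in range(horizon):
        e = shock if t == 0 else 0.0  # one-time impulse in period 0
        y = rho * y + e
        path.append(y)
    return path

pos = irf(+0.1)  # response to a positive shock
neg = irf(-0.1)  # response to a negative shock
print(pos[0], neg[0])  # 0.1 -0.1: same shape, opposite sign
```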

@jpfeifer Just commenting again, in the hope that you or someone else might see this and answer the question. Thanks a lot.

Thank you so much @jpfeifer. Follow-up questions to make sure I understand correctly:

Let’s say that we are considering the equation that defines the inflation rate for the US:

inflation_rate_US = some_variables + ERROR_INF_US

Now, the units are in % and not in log deviations. Then, if I want to impose a shock in which the US inflation rate rises by 10 percent, I would write

var ERROR_INF_US; stderr 0.1;

Is that correct?

Secondly, if I want the inflation rate to fall by 10%, then I would first modify the equation as follows:

inflation_rate_US = some_variables - ERROR_INF_US

Then, var ERROR_INF_US; stderr 0.1;

All other equations for other countries and variables will be unchanged. Am I right?

Finally, if I were to examine the effect of a shock in which GDP grows by 2 percent (for example), how would I translate from the output gap to the GDP growth rate in defining impulse responses?

To explain, if var ERROR_Y; stderr 0.02; means we impose a 2% upward shock on the output gap, what should I modify if I want to impose a 2% GDP growth rate?

Sorry for so many questions. I really thank you for taking the time to answer them.

I was plotting a few graphs today and two more related questions came to my mind.

- To clarify, is the standard error for a “1 percent” increase in the federal funds rate (where the equation for the federal funds rate is defined by some form of Taylor rule) different from the standard error if we want the federal funds rate to rise by 1 percentage “point”? An example of the latter would be the fed funds rate rising from 2 to 3 percent (a 1 percentage point, or 100 bps, increase). So the standard error would be 0.5. But a 1 percent increase would mean that the rate rises from 2 to 2.01 percent, so in that case the standard error would be 0.01.

Is that correct?
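To make the two notions concrete numerically (using the 2 percent starting rate from the example above):

```python
# A 1 percentage point (100 bp) increase vs. a 1 percent increase,
# starting from a federal funds rate of 2 (measured in percent).
rate = 2.0

pp_increase = rate + 1.0    # up by 1 percentage point: 2 -> 3
pct_increase = rate * 1.01  # up by 1 percent of the rate: 2 -> 2.02
print(pp_increase)   # 3.0
print(pct_increase)  # 2.02
```

The percentage point change is additive, while the percent change is relative to the level of the rate.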

- Is there a way to translate from the output gap to the GDP growth rate other than trial and error? For instance, if I say that the output gap rises by 20 percent,

i.e. var ERROR_Y; stderr 0.2;

where Y_GAP = linear combination of other variables + ERROR_Y

then by how much will GDP grow?
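One way to do the translation on paper, assuming the gap definition Y_GAP = 100*(log(real GDP) - log(potential GDP)) from above: since log(real GDP) = Y_GAP/100 + log(potential GDP), GDP growth equals the change in the gap plus the growth of potential. A sketch with made-up numbers (the 0.5% quarterly potential growth is purely illustrative):

```python
# Translating an output-gap change into GDP growth, assuming
# Y_GAP = 100*(log(Y) - log(Y_potential)). All numbers are illustrative.

def gdp_growth(gap_prev, gap_now, potential_log_growth):
    """Quarterly GDP log-growth in percent.

    log(Y_t) - log(Y_{t-1}) = (gap_t - gap_{t-1})/100
                              + (log Ybar_t - log Ybar_{t-1}),
    so in percent: (gap_t - gap_{t-1}) + 100 * potential log-growth.
    """
    return (gap_now - gap_prev) + 100.0 * potential_log_growth

# Potential GDP grows 0.5% per quarter (log growth 0.005) and the gap
# jumps from 0 to 0.2 on impact (a shock of 0.2, in the gap's own units).
print(round(gdp_growth(0.0, 0.2, 0.005), 10))  # 0.7 -> GDP grows 0.7%
```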

I don’t understand what you mean by 2.

By “policy rule”, are you referring to the equation for output gap in this case?

Y_GAP = output gap in % terms = 100*(log(real GDP) - log(potential GDP))

More specifically, the equation for output gap is defined as equation 13 on page 10 here.

The policy rules represent the solution of the model. They will be displayed by `stoch_simul`.

I wanted to follow up on this: the federal funds rate is measured in percentage points, not in % deviations. So, in defining the shocks, if I set

var ERROR_Y; stderr 0.2;

does that mean a 20 percentage “point” increase in the fed funds rate, or a 20 percent increase? The two are very distinct, so I wanted to clarify.

Secondly, when I plot the IRFs, the x-axis measures time in number of quarters. I observe that the black line (which is the IRF) goes to 0.06 in 6 quarters. Does this mean that the fed funds rate has gone up by 6 percent? Or 6 percentage points? Or something else?

Given that the gross interest rate appearing in such models is usually roughly 1, the distinction between percentage deviations and percentage points is mostly academic.

I still don’t understand, sorry. Could you explain in detail, or provide some references/websites where I can get more clarification on this? It seems that you’re implying that whether I say the effect on the federal funds rate is 6 percent or 6 percentage points, the two are the same? The number 6 is a number on the vertical axis of the IRF graph.

Without telling us how you defined your variables, it is impossible to tell. My point was the following: most often, people have gross rates 1+i in their model, not net rates i. So a 4% interest rate corresponds to 1.04. A log deviation from steady state will then be roughly:

\log(1+i) - \log(1+\bar i) \approx \log(1+i) - \log(1) = i - 0 = i

Thus, the log of a gross rate corresponds to the interest rate in percentage points. In that case, percentage deviations from steady state and percentage points coincide. Obviously, it will be different if you are taking logs of net rates. Then you would have percent of percentage points, which does not make sense.
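A quick numerical check of this approximation (the 4% rate is just an example):

```python
import math

# Check that the log deviation of a gross rate, log(1+i) - log(1),
# is approximately the net rate i itself when i is small.
i = 0.04  # a 4% net interest rate, i.e. a gross rate of 1.04
log_dev = math.log(1 + i) - math.log(1.0)
print(round(log_dev, 4))         # 0.0392, close to i = 0.04
print(abs(log_dev - i) < 0.001)  # True: the approximation error is small
```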

Dynare by default linearizes; it does not log-linearize. So all units are preserved. If you have a variable that’s measured in percentage points and you additively shock it directly, then the shock will be in percentage points as well.

Ok, thanks for confirming. What do you mean by “additively shock it”? Is it the same as saying that if we impose a standard error of 0.1, then we are imposing a shock of 10 percentage points as well?

If a variable is in absolute deviations and we do the same, will my aforementioned conclusion still hold true?

I mean that typically you have processes like

i_t=\rho\pi_t+\varepsilon_t

If i_t is in percentage points, then a shock \varepsilon_t of 0.1 will increase i_t by 0.1 percentage points.

It would be different if you had e.g.

i_t=\pi_t^\rho\times e^{\varepsilon_t}
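The contrast between the two processes can be checked numerically (the values of \rho, \pi_t, and the shock size are placeholders):

```python
import math

# Additive rule:       i_t = rho*pi_t + eps_t
# Multiplicative rule: i_t = pi_t**rho * exp(eps_t)
rho, pi_t, eps = 0.5, 2.0, 0.1  # placeholder values; pi_t in percentage points

# Additive shock: eps moves i_t by exactly eps units, here 0.1
# percentage points when i_t is measured in percentage points.
additive_effect = (rho * pi_t + eps) - (rho * pi_t)
print(round(additive_effect, 10))  # 0.1

# Multiplicative shock: eps scales i_t by exp(eps), i.e. roughly a
# 10 percent change in i_t, regardless of its level.
mult_factor = (pi_t**rho * math.exp(eps)) / (pi_t**rho)
print(round(mult_factor, 4))  # 1.1052
```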

I see, thanks, that helps. I wonder how we interpret the effect displayed in the IRF graphs. For example, suppose we define interest rates in percentage points, set the standard error to 0.1, which is equivalent to imposing a 10 percentage point shock, and plot the impulse response function graphs. Let’s say that after 3 quarters, the impulse response curve shows a value of 0.5. Would that mean that the federal funds rate has risen by 0.5 percent? By 0.5 percentage points? By 5 percentage points?

Finally, would the horizontal line at 0 indicate the long-run path of the variable?

I see. So, when we define the variable in percentage points, we know that the y-axis in an IRF is measured in percentage points. This means that a value of 0.7 on the y-axis implies that the variable increases by 0.7 percentage points relative to its long-run forecast path. Is that correct?