Different declarations of variables and parameters

Dear Johannes Pfeifer,

The assumption advanced here is that there is a negative shock (-1 percentage point) to the natural interest rate, lasting for six periods. Furthermore, define beta (= 0.995) = 1/(1+rn) = 1/(1+rho). In steady state with neither inflation nor growth, rn = rho = i = r = 0.5%. Thus, following the shock, beta = 1.005 and rn = -0.5% for six periods, after which they revert to their initial values.
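A minimal sketch of how such a six-period deterministic shock could be declared in Dynare (the exogenous variable name eps_z and the decimal units are my own assumptions, not taken from the files discussed here):

    shocks;
        var eps_z;        // hypothetical exogenous driver of the natural rate
        periods 1:6;      // shock active for six periods
        values -0.01;     // -1 percentage point in decimal units
    end;

    perfect_foresight_setup(periods=100);
    perfect_foresight_solver;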

Obviously, these dynamics affect other parameters and variables in the model. This is what is specified in Gali5C2, which is based on your script replicating figure 5.3 in Gali (2015). Running this model yields a solution with the expected dynamics. The path of beta can be inferred from, say, the NKPC. Hence my labeling these parameters as “variable parameters”.

I do have doubts, but what prevents one from considering shocks to structural parameters, ranging from the discount factor to, for example, price elasticities? Under this approach, the parameters appearing in the steady_state_model block are no longer constant.

Thank you and regards,
Jose

P.S. Please do question these hypotheses.

If you add shocks to parameters, they are not parameters anymore and must not be declared as such. You need to clearly distinguish time-varying objects, their steady state values, and parameters.
What you describe indicates that beta is an exogenous variable that follows a specified process. When you linearize your model, you need to take this into account. Many objects in the model will simply depend on the steady-state value of beta (which can be defined as a parameter if you want). In other places, the actual period-t value of beta needs to appear.
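To illustrate the separation, here is a minimal sketch with purely illustrative names such as betta, betta_ss, and eps_betta (my addition, not code from the thread):

    var betta;               // time-varying discount factor, declared as a variable
    varexo eps_betta;        // innovation moving the discount factor
    parameters betta_ss rho_betta;
    betta_ss  = 0.995;       // steady-state value, used where only the long-run beta matters
    rho_betta = 0.5;         // assumed persistence of the process

    model;
        // law of motion for the discount factor; the remaining equilibrium
        // conditions would then use betta, not betta_ss, wherever the
        // period-t discount factor appears
        betta = (1-rho_betta)*betta_ss + rho_betta*betta(-1) + eps_betta;
    end;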

Dear Johannes Pfeifer,

Thank you for taking the time to write such insightful comments.

1. Can I surmise from your text that you would not be offended (academically speaking, of course) should I consider time-varying “objects” that are neither variables nor parameters?

To get a flavor, suppose a looming war with dire prospects; beta would likely fall, as there is the expectation of no tomorrow. Alternatively, suppose a scenario of deflation and poor growth prospects. Under this hypothesis, the representative agent may prefer to defer consumption to maximize utility: that is, beta may rise above unity. Once either of these situations is somehow resolved, beta may return to a “normal”, stable value.

Please let me have your comment on question (1).

Best regards,
Jose

Hi,
I don’t have a stake in what the proper naming is. But in Dynare, there are only two types of objects: parameters and variables. The former are time-invariant, the latter are time-varying. That is the only distinction you need to make.
Whether you call something a time-varying parameter in your paper does not matter. In that case, the naming mostly reflects that something which was treated as fixed in the past now becomes time-varying.
But even in this case, I would prefer to treat the discount factor as fixed and introduce a preference shock that temporarily makes the rate of time preference deviate from its long-run value. That is the cleanest way.
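A minimal log-linear sketch of that approach (my own illustration with assumed names and calibration, e.g. z, eps_z, rho_z, and a textbook-style three-equation block; not code posted in this thread). The parameters betta and kappa stay fixed; only the exogenous shifter z moves the natural rate:

    var pi y_gap i r_nat z;
    varexo eps_z;
    parameters betta siggma kappa phi_pi rho_z rho;
    betta  = 0.995;
    siggma = 1;
    kappa  = 0.17;
    phi_pi = 1.5;
    rho_z  = 0.5;
    rho    = -log(betta);          // steady-state real rate implied by the fixed betta

    model(linear);
        pi    = betta*pi(+1) + kappa*y_gap;                 // NKPC with a fixed slope
        y_gap = y_gap(+1) - 1/siggma*(i - pi(+1) - r_nat);  // dynamic IS curve
        r_nat = rho + (1-rho_z)*z;                          // natural rate shifted by the preference shock
        z     = rho_z*z(-1) + eps_z;                        // preference shifter, AR(1)
        i     = rho + phi_pi*pi;                            // simple interest-rate rule
    end;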

Dear Johannes Pfeifer,

Thank you. Your comments are duly noted.

However, consider, say, nominal rigidities in a New Keynesian model (NKM). These parameters have a particular bearing on the outcomes of a fiscal stimulus, especially when it is money-financed. The point can be shown by running the model under different calibrations. That is “the cleanest way”, I agree. The model may or may not incorporate a ZLB scenario obtained through a shifter to the utility function.

1. Assume now that I want to consider that rigidities are (as expected) higher following a negative shock, but then fall as a result of a self-fulfilling inflation scare. How can I capture these dynamics?

2. On the other hand, what is the purpose of model local variables, declared with the # operator?

Indeed, one has to keep the log-linearization process in mind. But in a model that is linear in the variables, the steady state of most variables remains unchanged at zero, with the likely exception of money and prices with respect to the terminal values. The variables' log deviations will differ when model local variables capturing the dynamics of the nominal rigidities are used.

I trust that I am not imposing on you. Regards,
Jose

  1. I do not understand the experiment you are considering here.
  2. Model local variables are placeholders. Whenever one is encountered, Dynare substitutes in the defined expression. Thus, they obviate the need to write a given expression over and over again (a small illustration follows after this list).
  3. Regarding linearization: as you linearize with respect to variables, it clearly matters what you consider as a variable and what as a parameter.
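For illustration (my addition, using Gali's (2015) notation for the composite coefficients; this is not code posted in the thread), here is a fragment of a model block where those composites are written as model local variables:

    model;
        // the # lines are textual placeholders, substituted wherever their names appear below
        #Omega  = (1-alppha)/(1-alppha+alppha*epsilon);
        #lambda = (1-theta)*(1-betta*theta)/theta*Omega;
        #kappa  = lambda*(siggma+(varphi+alppha)/(1-alppha));
        pi = betta*pi(+1) + kappa*y_gap;      // NKPC written directly in terms of the composites
        // ... remaining equilibrium conditions omitted in this fragment ...
    end;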

Dear Johannes Pfeifer,

Thank you. I may well be “unorthodox”, or plainly wrong.

1. I am simply considering a time-varying nominal price rigidity (theta) in an NKM setting. The economy is hit by an exogenous negative shock, and the (independent) monetary authority and the government respond with a money-financed fiscal stimulus.

2. Noted. However, to simulate the time-varying theta, I defined it as a varexo and had to place the associated composite coefficients (lambda and kappa, in Gali's (2015) notation) in the model block as model local variables.

3. Noted. Nevertheless, under this procedure the model solves: the rank condition is satisfied and a perfect foresight solution is found.

Am I being stubborn, not to mention other words?

Best regards,
Jose Luis

Dear Johannes Pfeifer,

I confess that I had not researched the subject of time-varying parameters. Google surprised me with the abundance of literature on the topic.

I attach a recent paper dealing with a NKM and the ZLB. You may want to consider reading the abstract.

Regards,
Jose

TimeVaryingParameters_2016.pdf (1.9 MB)

  1. Of course there is a large literature on the topic.
  2. Consider the pricing FOC of the NK model

\sum_{k=0}^{\infty}\theta^{k}E_t\left[\Lambda_{t,t+k}\,Y_{t+k|t}\,\frac{1}{P_{t+k}}\left(P_t^{*}-\frac{\varepsilon}{\varepsilon-1}MC_{t+k|t}\,P_{t+k}\right)\right]=0
Linearizing, you get the typical
\pi_t=\beta E_t\pi_{t+1}+\kappa x_{t}
But if you now use
\sum_{k=0}^{\infty}\theta_{\textcolor{red}{t}}^{k}E_t\left[\Lambda_{t,t+k}\,Y_{t+k|t}\,\frac{1}{P_{t+k}}\left(P_t^{*}-\frac{\varepsilon}{\varepsilon-1}MC_{t+k|t}\,P_{t+k}\right)\right]=0
you will almost surely not get the same linear Phillips Curve. Put differently, you cannot start from the end and simply make the \kappa in the linearized equation time-varying.

Dear Johannes Pfeifer,

Thank you once more. You have been most helpful.

1. I was aware of time-varying parameters in the field of finance, but not in macro in an NKM/ZLB framework…

2. You are right, kappa is not the same, and thus neither is the NKPC. However, upon linearizing and using Taylor approximations, if I did not make a mistake, I arrived at:
lambda = ((1-theta_t)*(1-beta * theta_t)/theta_t) * Constant.

3. Fiddling with beta is more complicated, though a perfect foresight solution was obtained. Is there any way not to start from the end?

Best regards,
Jose

  1. Again, anything that is time-varying is a variable for the purpose of computing a solution! This needs to be considered when doing a linearization/Taylor approximation, because you linearize in all variables.
lambda = ((1-theta_t)*(1-beta * theta_t)/theta_t) * Constant

is obviously not linear in \theta_t and thus cannot be a proper linearization (a first-order expansion is sketched after this list).
  3. You should not fiddle around, but rather properly derive the FOCs characterizing the problem from scratch. You cannot take shortcuts, as the previous discussion clearly shows.
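To make that point concrete (this expansion is my own addition, not part of the reply above): since

\frac{(1-\theta_t)(1-\beta\theta_t)}{\theta_t}=\frac{1}{\theta_t}-(1+\beta)+\beta\theta_t,

a proper first-order expansion around the steady-state value \theta is

\lambda_t \approx \left[\frac{(1-\theta)(1-\beta\theta)}{\theta}+\left(\beta-\frac{1}{\theta^{2}}\right)(\theta_t-\theta)\right]\cdot \text{Constant},

so a term in (\theta_t-\theta) necessarily shows up in the linearized equations; it cannot be absorbed by simply relabeling \kappa as \kappa_t.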

Dear Johannes Pfeifer,

Thank you. Back to the drawing board.

Regards,
Jose

Dear Johannes Pfeifer,

Regarding deterministic shocks, I am assuming that a well-behaved model will reach a steady state in finite time. However, when setting the number of simulation periods, say 15 vs. 100, there may be significant differences in the values of some of the variables over the 15 intermediate periods, even though the initial and terminal conditions are identical.

The question is why? A perfect foresight solution was found in both cases.

Wishing you a joyous holiday season,
Jose

Because you force the system to be in its steady state in the last period. If the time in between is too short, the system will not reasonably have converged back to the steady state/terminal condition, and this affects all previous periods.
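One way to check this (my own suggestion, not from the reply above) is to solve over a generous horizon and verify that the paths are already flat well before the terminal period, so that the imposed terminal condition is not doing any work:

    perfect_foresight_setup(periods=100);
    perfect_foresight_solver;
    // oo_.endo_simul then holds the simulated paths (one row per endogenous
    // variable, including the initial and terminal conditions); if the last
    // simulated periods are not yet close to the steady state, the horizon
    // is too short and the intermediate periods will be distorted.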

Dear Johannes Pfeifer,

Thank you,
Jose