Levels versus Growth Rates

Hi,

I’ve noticed that many posts on this forum run into collinearity problems, especially when people try to identify the level of prices. I understand that business cycle models cannot identify the price level from a theoretical point of view, but I don’t see exactly how this creates collinearity, given that one could set the initial level to, say, unity.

For example, take the standard Euler equation:

1=beta*(UC(+1)/UC)*(R/PI(+1))

Assuming that R is identified by a Taylor rule, the above Euler equation identifies the expected change in the price level PI(+1). If this is a simplified framework where STEADY_STATE(PI)=1, then the price level can be entered as an endogenous variable:

PI=P/P(-1)

So if STEADY_STATE(P)=1, the model checks out against the steady state model block, and it delivers a solution and impulse response functions with a price level that is nice and smooth. However, model_diagnostics shows the usual message that all equations containing PI are subject to collinearity.

Why is this happening, what is the intuition and should we be worried? Thank you.

It is well known that the price level is indeterminate: you can choose any initial level for P and get the same results. A corollary of this is that the price level has a unit root. While inflation returns to steady state, this does not apply to the price level: the IRF will be nice and smooth, but it will not return to the initial steady state. Whenever there is a unit root in the model, you will get a collinearity warning from model_diagnostics (which in this case is actually a feature of the model and not a problem to worry about).
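The point can be illustrated with a minimal numerical sketch (not Dynare output; a stylized log-linear system with an illustrative persistence rho=0.5, where inflation is the first difference of the log price level):

```python
rho, T = 0.5, 200     # illustrative persistence and IRF horizon
pi, p = [1.0], [1.0]  # period-0 responses to a unit inflation shock (log deviations)
for t in range(1, T):
    pi.append(rho * pi[-1])   # inflation mean-reverts towards steady state
    p.append(p[-1] + pi[-1])  # the log price level accumulates inflation
print(round(pi[-1], 10))  # 0.0: inflation is back at steady state
print(round(p[-1], 6))    # 2.0: p converges, but to 1/(1-rho), not back to 0
```

The price-level IRF is smooth and convergent, yet where it settles depends on the shock: it ends at a new permanent level rather than returning to the initial one.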

Thank you for the prompt response. The unit root interpretation makes a lot of sense. The only problem is that the autocorrelation function of the price level does show a tendency to decay (it takes around 20 lags to converge to zero, so in practice a usual Dickey-Fuller test would not reject a unit root, but in theory I would expect it not to decay at all), and the impulse response function of the price level converges towards zero as well (it does converge if one sets irf=100). Inspecting the simulated data (periods=10000), the plot does look centered around the mean of unity (if STEADY_STATE(P)=1). Could there be another interpretation?

Within a model, there is no reason for an ADF test. The check command will show you the eigenvalues; one of them should be 1. A standard Taylor rule gives rise to a unit root in prices, while a money growth rule yields a determinate price level. You can verify this in the new edition of Gali’s textbook, which shows the IRFs for prices. Of course, there are shocks and specifications where the price level response is not permanent.
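As a back-of-the-envelope check (a stylized two-variable system in logs with illustrative rho=0.5, not taken from any particular model): the transition matrix of (pi, p) with pi_t = rho*pi(t-1) and p_t = p(t-1) + pi_t has eigenvalues rho and exactly 1:

```python
import math

rho = 0.5  # illustrative persistence
# Transition matrix A = [[rho, 0], [rho, 1]] for the state (pi_t, p_t).
# Its eigenvalues solve lambda^2 - tr(A)*lambda + det(A) = 0.
tr = rho + 1.0
det = rho * 1.0 - 0.0 * rho
disc = math.sqrt(tr * tr - 4.0 * det)
eigs = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])
print(eigs)  # [0.5, 1.0] -- the unit eigenvalue is the unit root in the price level
```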
You can also see the problem analytically. We have a uniquely determined inflation rate. The price level response is then given by

P(t) = Pi(t)*P(t-1) = Pi(t)*Pi(t-1)*P(t-2) = ... = [Prod_{i=0}^{t-1} Pi(t-i)] * P(0)

Only in special cases will the product of inflation rates evaluate to exactly 1, so that P returns to P(0).
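A small numerical sketch of this product formula (illustrative AR(1) inflation with rho=0.5 and a deterministic sequence of three shocks; everything in logs, so the product becomes a sum):

```python
import math

rho = 0.5
shocks = [0.01, -0.005, 0.02] + [0.0] * 197  # three shocks, then none

pi_hat, log_P = 0.0, 0.0  # log inflation and log price level; steady state Pi = P = 1
for e in shocks:
    pi_hat = rho * pi_hat + e  # inflation is stationary and returns to 0
    log_P += pi_hat            # log P(t) = log Pi(t) + log P(t-1)

print(round(math.exp(pi_hat), 10))  # 1.0: inflation is back at its steady state
print(round(math.exp(log_P), 6))    # 1.051271 = exp(0.025/(1-rho)): P != P(0) = 1
```

Inflation is back at its steady state at the end, but the cumulated product of inflation rates leaves P at a different level than where it started.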

Then would it be correct to deduce that, in a setting where the price level response is not permanent, the collinearity detected by model_diagnostics is a cause for concern?

That always depends on the economics. If the collinearity comes from a unit eigenvalue, one needs to check whether there is an economic reason for the unit root or whether it is due to a bug. If there is a good reason for it, one need not worry. Note, however, that there might be more than one collinear relationship, so one needs to check all of them.

That makes more sense now.

One more question about collinearity. Suppose it results from using STEADY_STATE(variable_name) in the model block, but the steady state itself is analytically involved and cumbersome to type out in terms of parameters. Is there a way to get around that? Is that why people use an external steady state file?

You are confusing something. If collinearity arises after using the steady_state operator, what you have done is precisely to remove the model’s ability to determine the steady state of that variable as a function of the parameters.

Take again the example of the Fisher equation and steady state inflation. If you exogenously provide the steady state inflation rate pibar (which cannot be determined endogenously), you can endogenously compute the steady state nominal interest rate as the product of the real interest rate and inflation in steady state. Because this works within the model, you can use a Taylor rule like

r/steady_state(r)=(pi/pibar)^1.5

This equation will uniquely pin down the steady states of pi and r, because pi now needs to equal pibar in steady state. Therefore, in any other equation you can use the steady_state operator as you see fit; there is no problem in determining anything.

Similarly, when you exogenously specify the steady state nominal interest rate rbar, you can endogenously compute the steady state inflation rate and use

r/rbar=(pi/steady_state(pi))^1.5

This equation will again uniquely pin down the steady state of pi and r. Therefore, in any other equation you can use the steady_state operator as you see fit.

The problem arises when you try to simultaneously compute the steady state of inflation and the nominal interest endogenously from the model and use

r/steady_state(r)=(pi/steady_state(pi))^1.5

This cannot work, because any combination satisfying the Fisher equation in steady state is a steady state, and there are infinitely many such combinations. In contrast, in the two other examples, picking either rbar or pibar resolves this indeterminacy by selecting one particular combination.
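A minimal numerical sketch of these cases (the gross real rate 1.01 and target inflation 1.02 are illustrative; the steady state Fisher equation is written as R = r_real * Pi):

```python
r_real = 1.01  # illustrative gross real interest rate

# pibar supplied exogenously -> the steady state nominal rate is pinned down.
pibar = 1.02
R_ss = r_real * pibar
print(round(R_ss, 6))  # 1.0302

# Both steady states left endogenous -> ANY Pi delivers a valid steady state.
for Pi in (1.00, 1.02, 1.05):
    R = r_real * Pi
    assert abs(R / Pi - r_real) < 1e-12  # the Fisher equation holds for every candidate
# Nothing selects among these infinitely many combinations unless pibar
# (or rbar) is fixed from outside the model.
```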

Summarizing: if the steady state is uniquely determined within the model, there is no problem with using the steady_state operator. The problem only arises when you use it to replace an exogenous relation with an endogenous one; the model will then be underdetermined.


It seems to work well in the context of the Taylor rule and the Fisher equation, and I generally have no collinearity problems with the monetary policy rules anymore.

I guess my question was not clear enough. I am considering the typical intermediation costs in the fashion of Turnovsky (1985) and Schmitt-Grohé & Uribe (2003) that have been used throughout the small open economy literature:

z=r_star+alpha*(exp(-(D-STEADY_STATE(D)))-1)

Where z is the gross rate of return on foreign bonds, r_star is the gross foreign nominal interest rate, alpha>0, and D is the stock of foreign assets. I find that if I use STEADY_STATE(D), model_diagnostics reports collinearity in several equations that incorporate z. But when I replace STEADY_STATE(D) with the number obtained from oo_.steady_state, there is no collinearity. So essentially, if STEADY_STATE(D) is replaced by its counterpart expressed in terms of parameters, there is no problem, except that the solution for STEADY_STATE(D) is really cumbersome. I tried defining a parameter:

parameters ... gamma=STEADY_STATE(D) ...

But it is not compatible with Dynare. I could alternatively type out the whole solution in terms of parameters and then define gamma, but I was hoping there is a more efficient way. I’m just confused as to whether I should do anything about this and, if so, how?

Thanks ever so much for the detailed comments.

I may be missing something, but in these types of models you use that particular equation to pin down the steady state level of debt, which is otherwise not uniquely determined within the model. What you call STEADY_STATE(D) is actually the target level of debt and must therefore be expressed as a function of deep parameters to provide the necessary link. Essentially, you need to calibrate the model correctly, which is not possible endogenously. What you could do is use a model-local variable with the pound operator to define that target level as a function of the deep parameters.
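A small numerical sketch of why the equation alone cannot pin down the debt target (alpha and r_star are illustrative, and the premium is written in the usual form exp(-(D - Dbar)) - 1):

```python
import math

alpha, r_star = 0.001, 1.04  # illustrative calibration

def z(D, Dbar):
    """Debt-elastic return on foreign bonds (D = foreign assets, Dbar = target)."""
    return r_star + alpha * (math.exp(-(D - Dbar)) - 1.0)

# At the steady state D = Dbar, the premium vanishes for EVERY candidate target:
for Dbar in (-2.0, 0.0, 3.5):
    assert z(Dbar, Dbar) == r_star
# So the equation carries no information about the level of Dbar itself; the
# target must come from outside, e.g. a parameter or a #-defined expression.

# Away from the target the premium does move the return, as intended:
print(z(0.0, 1.0) > r_star)  # True: assets below target raise the premium
```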