Can we say one monetary policy reaction function is better than the other?

For example, for the ECB, many papers have estimated different versions of the monetary policy reaction function. Is it because different authors believe differently about how the central bank sets its policy rate? And hence we cannot really say one estimated policy reaction function is better than the other?

For example, one author may say, let me see how the central bank reacts to the inflation gap (x1) and output gap (x2). Then another author would say, let me see how the central bank reacts to the inflation gap (x1), output gap (x2), and exchange rate (x3). Then some author would add commodity prices; another would add, say, deficits; another, say, the trade balance; and another, say, the lag of the policy rate… and the list could go on. But these choices seem to come from a purely empirical perspective, right?
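As a concrete sketch of this purely empirical perspective, here is a minimal Python illustration using made-up simulated data (not actual central bank series): two "authors" estimate nested Taylor-rule specifications by OLS, one with just the two gaps, one adding an exchange-rate term.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200

# Hypothetical data: in practice these would be observed series.
infl_gap = rng.normal(0, 1, T)    # inflation gap x1
output_gap = rng.normal(0, 1, T)  # output gap x2
exch_rate = rng.normal(0, 1, T)   # exchange rate x3

# Suppose the "true" rule only reacts to the inflation and output gaps.
rate = 2.0 + 1.5 * infl_gap + 0.5 * output_gap + rng.normal(0, 0.1, T)

def ols(y, X):
    """OLS with an intercept; returns [const, slope coefficients...]."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

# Author 1: react to the inflation gap and output gap only.
b_small = ols(rate, np.column_stack([infl_gap, output_gap]))

# Author 2: add the exchange rate as an extra feedback variable.
b_big = ols(rate, np.column_stack([infl_gap, output_gap, exch_rate]))

print(b_small)  # roughly [2.0, 1.5, 0.5]
print(b_big)    # the exchange-rate coefficient comes out near zero
```

Both regressions "work", which is exactly the problem: the data alone do not tell you which specification to write down in the first place.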

And the choice of the variables in the reaction function could also be based solely on theory, right? For example, in New Keynesian models, a monetary policy rule with only the inflation and output gaps requires the coefficient on the inflation gap to be greater than 1 (the Taylor principle). So, for example, I can check that for some country A and then use that information for further analysis, right? Then someone may ask, why didn’t you include the exchange rate, commodity prices, trade balance, government deficit, some lags, etc., in the reaction function? My answer is that they are unimportant for the theory and the analysis I am considering. A fair answer, right? The theory I have adopted is restricting the choice of the variables in the reaction function. That seems to be a fair answer to me, but I would kindly also like to know what others think, if I may ask.
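As a hypothetical illustration of such a theory-based check: given an estimated inflation-gap coefficient and its standard error (the numbers below are made up for illustration), one could test whether the Taylor principle (coefficient greater than 1) holds with a one-sided t-test:

```python
# Hypothetical Taylor-principle check for "country A":
# suppose we estimated the inflation-gap coefficient and its std. error.
beta_pi, se = 1.4, 0.15  # made-up numbers for illustration

# One-sided test of H0: beta_pi <= 1 against H1: beta_pi > 1.
t_stat = (beta_pi - 1.0) / se
print(t_stat)  # about 2.67, above the 5% one-sided critical value 1.645
```

If the test rejects, the estimated rule is at least consistent with determinacy in the adopted New Keynesian framework, which is the kind of further analysis described above.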

I have even heard arguments that the output gap might already capture other variables like the deficit, trade balance, etc., because even if the central bank looks at deficit data, for example, its focus at the end of the day is how that affects inflation and the output gap.

Maybe there are no right or wrong monetary policy reaction functions, just different reaction functions for different questions (based on the goal of the author). Is that why we have so many of them? Or do we have so many of them because we are still looking for the correct one? Thanks for any answer. I am trying to transition from student to researcher, but there are so many arguments and so much information out there… :) Could you tell me whether my thoughts above are correct or make sense, and why there are so many different policy reaction functions? Thank you!


I think the first important distinction is whether you are talking about a normative or a descriptive exercise. If you are trying to describe the data, then there is a high chance of better capturing empirical behavior by allowing for more feedback variables. But there is also a risk of overfitting. The normative literature has shown that optimal simple (implementable) rules with feedback in only a few observed variables approximate fully optimal behavior reasonably well. These two findings combined result in many researchers specifying rather simple rules that tend to perform well.
But you are right that usually you would like to test the fit.

Many thanks, Prof. Pfeifer, for the reply. If I may kindly ask, so normative in this sense would be associated with, for example, the monetary policy rules we use in DSGE models, right? Say, I am studying or evaluating how the estimated policy rule of my country’s central bank deviates from an optimal rule that maximizes welfare. And, say, I am considering a class of simple implementable Taylor rules. That is fine, right?

I have received questions like, “the coefficients in the estimated rule (that I am comparing to the optimal) may be overestimated or underestimated because some variables or lags are missing in the estimated specified rule.” I think there is a point there if I am simply describing the data, say using OLS and just testing the fit of the model (where I could introduce new variables and more lags, for example), right?

But if I am interested in, say, optimal simple rules (with just inflation and output gap), then the estimated rule that I compare to the optimal rule should also have just these two variables, I guess. So here, in a normative exercise, the theory is justifying the choice of the variables in the rule, right?

On the other hand, in a descriptive exercise, the coefficients in the estimated simple Taylor rule, for example, may actually be overestimated (due to omitted variables), but that is not the point or focus of a normative exercise. Is my understanding here correct? Many thanks!!

And even in descriptive models, researchers report results like: if the inflation rate increases by 1%, the central bank raises the interest rate by 1.5%. Then another researcher adds more variables and lags and says: if the inflation rate increases by 1%, the central bank raises the interest rate by 0.5%… and so on. So, in this case, which result is more ‘true’ kind of depends on how well the empirical model fits the data, right? And then on checking metrics like R-squared? I put ‘true’ in quotation marks because sometimes I wonder who knows what is true… and thus tend to treat every result as another opinion from a different perspective (using a different model). Maybe I am going too far with this… but any advice on these issues would be very much appreciated. Thanks.

If you are fitting empirical data, then you are concerned with actual central bank behavior. In that case, you can run different model estimation exercises and compare the models with different Taylor rules using the marginal data density. That’s the proper equivalent to comparing R^2.
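A rough frequentist sketch of this kind of model comparison, using simulated data (in a Bayesian estimation one would compare the marginal data densities directly): BIC is a large-sample approximation to minus twice the log marginal likelihood, so ranking nested Taylor-rule specifications by BIC mimics that comparison in spirit. All series below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000

# Hypothetical data where the "true" rule reacts to inflation and the gap.
infl = rng.normal(0, 1, T)
gap = rng.normal(0, 1, T)
extra = rng.normal(0, 1, T)  # an irrelevant candidate regressor
rate = 1.5 * infl + 0.5 * gap + rng.normal(0, 0.2, T)

def bic(y, X):
    """BIC of an OLS fit with intercept; lower is better."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    n, k = Z.shape
    return n * np.log(resid @ resid / n) + k * np.log(n)

bic_under = bic(rate, infl)                               # omits the gap
bic_right = bic(rate, np.column_stack([infl, gap]))       # correct rule
bic_over = bic(rate, np.column_stack([infl, gap, extra])) # extra term

print(bic_right < bic_under)  # True: the omitted gap clearly matters
print(bic_right < bic_over)   # typically True: the extra term is penalized
```

The penalty term is what distinguishes this from chasing R-squared, which can only increase as you add variables.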


Dear Prof. Pfeifer, may I kindly ask a related question?

  1. Let’s say I can numerically compute optimal monetary policy under Ramsey allocation for some economy. My focus is output and price stability.
  2. Then, in Gali’s book, he tries to get an analytical expression for the optimal rule (say, under discretion), but it is not implementable. This general rule does not contain lags, for example.
  3. Then, we can approximate the optimal rule with an implementable rule, say i_t = \beta_1 \pi_t + \beta_2 gap_t. This rule seems to approximate the optimal rule well.
  4. Why would there be a need to use other rules to approximate the optimal rule? For example, I have seen papers use i_t = \beta_1 \pi_t + \beta_2 gap_t plus AR(1) and AR(2) terms, i.e., lags of i_t, \pi_t, and gap_t, without saying why the lags are there. Sometimes it seems they are included simply as plausible alternatives to the basic form (with no lags), and the authors even compare the performance of the rules with lags against the rules without lags. It appears the goal is ‘rule searching,’ if I can put it that way. But is it? Is there a well-known justification for these extensions besides the fact that they are different and plausible? Thanks.
  1. The more complicated rules nest the simpler ones. The coefficients on the additional terms may simply be estimated to be zero.
  2. The simple model does not have endogenous state variables, making it rather easy to solve for optimal policy. In more complicated models, there is a lot more endogenous persistence due to endogenous states. In that case, it makes sense for the rule to be more complicated.
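The nesting point can be made concrete with a standard inertial (interest-rate-smoothing) rule, i_t = \rho i_{t-1} + (1 - \rho)(\beta_1 \pi_t + \beta_2 gap_t): setting \rho = 0 drops the lagged term and recovers the simple static rule. A minimal sketch with illustrative numbers only:

```python
# Hypothetical illustration: an inertial Taylor rule nests the static one.
def inertial_rate(i_prev, infl, gap, beta1=1.5, beta2=0.5, rho=0.8):
    """i_t = rho * i_{t-1} + (1 - rho) * (beta1 * pi_t + beta2 * gap_t)."""
    return rho * i_prev + (1.0 - rho) * (beta1 * infl + beta2 * gap)

# With rho = 0 the lagged term drops out and we recover the simple rule:
static = inertial_rate(i_prev=3.0, infl=2.0, gap=1.0, rho=0.0)
print(static)  # 1.5 * 2.0 + 0.5 * 1.0 = 3.5

# With rho = 0.8 the rate adjusts only gradually toward that target:
inertial = inertial_rate(i_prev=3.0, infl=2.0, gap=1.0, rho=0.8)
print(inertial)  # 0.8 * 3.0 + 0.2 * 3.5 = 3.1
```

So estimating the richer rule costs little: if the data (or the optimal policy) do not call for smoothing, the estimate of \rho can simply come out at zero.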

I see. Many thanks!! Hmm… but how do you know how complex the rule should be if the model is complicated? By one’s own judgment and theoretical justifications?

Or maybe you look at the persistence of the interest rate, inflation, and output data, for example. And if autocorrelation is high, you include lags in the rule? Is there some general guidance here that is typically used?
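As a sketch of that diagnostic idea (using simulated series, not actual policy-rate data): compute the first-order autocorrelation of the candidate series; high persistence would suggest including lagged terms in the rule, while a near-zero value would not.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500

# Hypothetical persistent "policy rate" (AR(1) with rho = 0.9)
# versus a white-noise series for comparison.
eps = rng.normal(0, 1, T)
rate = np.empty(T)
rate[0] = eps[0]
for t in range(1, T):
    rate[t] = 0.9 * rate[t - 1] + eps[t]

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

print(acf1(rate))  # high (close to 0.9): lags in the rule look warranted
print(acf1(eps))   # near zero: little case for lagged terms
```

This is only an informal screening device, of course; the formal version would be the model-comparison exercise (marginal data densities) discussed above.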

That captures it well. You should be able to defend your choices, which often involves theoretical/empirical justifications. In the case of US Taylor rules, the starting point is typically the dual mandate, which argues for both inflation and unemployment (or output as a proxy) being present.
