Discounting under Optimal Monetary Policy

Thanks for the code!

What is the difference between the conditional and the unconditional loss?

Now I see that you are indeed right.
The economic reason is that J_1 is the loss from the time t=1 perspective, so period t=1 is not discounted. It is better to define J_{t_0}=E_{t_0}\sum_{t=t_0}^\infty\beta^{t-t_0} \{ \pi_{t}^2+\omega_y y_{t}^2 \}.
Hence, J_1 = E_1\sum_{t=0}^\infty\beta^{t} \{ \pi_{t+1}^2+\omega_y y_{t+1}^2 \} is the correct formula, and J_0 = \pi_0^2+\omega_y y_0^2 + \beta E_0 J_1.
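
For reference, this recursion is what the J equation in the model block implements. A minimal sketch of the relevant mod-file fragment (the variable names infl, y, J and the parameter names omega_y and b are assumed to match the declarations in the actual code; the value of b is illustrative):

var J;                      % recursively defined conditional loss
parameters omega_y b;
omega_y = 0.05;             % CB preference weight on output
b       = 0.99;             % discount factor beta (illustrative value)

model;
% ... other model equations ...
J = infl^2 + omega_y*y^2 + b*J(+1);   % J_t = pi_t^2 + omega_y*y_t^2 + beta*E_t[J_{t+1}]
end;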

\omega_y is the central bank's preference weight on output. You can call it ome_y=0.05 in the code; your declaration omega_y is also fine.

Which lines of code do I have to modify if:
1.) I would like to change or augment the Taylor-type rule (optimize over different parameters)?
I hope this requires only the standard manipulations in the mod-file: changing the parameters, the model block, and osr_params.
2.) I would like to change the loss function, e.g. J_{t_0}=E_{t_0}\sum_{t=t_0}^\infty\beta^{t-t_0} \{ \pi_{t}^2+\omega_y y_{t}^2 + \omega_i i_t^2\}?

Best regards,
Max

The difference is whether you use the conditional expectation E_{t_0} (i.e., you condition on the states at time t_0, typically something like the steady state) or the unconditional one E.

  1. You would need to change the rule in the model block and adjust the x_start and x_opt_name entries to include the set of parameters to be optimized over (see the sketch below).
  2. You need to change the J defined in the model block along the lines above.
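
For concreteness, a minimal sketch of what such changes could look like (the augmented rule, the extra coefficient del_x, and the omega_i term are purely illustrative and not taken from the original mod-file):

% 1.) in the model block, augment the Taylor-type rule, e.g.
%        i = rho_TR*i(-1) + (1-rho_TR)*(del_pi*infl + del_y*y) + del_x*x;
%     declare del_x under parameters and add it to the optimized set:
x_start    = [rho_TR del_pi del_y del_x]';
x_opt_name = {'rho_TR', 0, 1
              'del_pi', 1, Inf
              'del_y',  0, Inf
              'del_x',  0, Inf
              };

% 2.) in the model block, add the interest-rate term to the recursive loss
%     (omega_i must be declared as a parameter and assigned a value):
%        J = infl^2 + omega_y*y^2 + omega_i*i^2 + b*J(+1);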

I guess Di Bartolomeo et al. (2016) use the conditional loss (see the end of p. 380 and the beginning of p. 381):

Optimal monetary policy in a New Keynesian model with heterogeneous expectations
Journal of Economic Dynamics & Control. 73 (2016)

So in order to minimize their loss with an OSR, the conditional loss would be required.

What do I have to change?
By the way, does ramsey_policy minimize the conditional or the unconditional loss?

W.r.t. 1)
As you wrote, lines 52-56 are the most important ones to change (in addition to the model block and the declaration of parameters and osr_params). Do you impose bounds on the parameters in x_opt_name? And why?

x_start=[rho_TR del_pi del_y]'; %initial values for the optimized parameters
%columns of x_opt_name: parameter name, lower bound, upper bound
x_opt_name={'rho_TR',0,1
            'del_pi',1,Inf
            'del_y',0,Inf
            };

W.r.t. 2)
Changing the loss requires only changing line 39, plus declaring the parameters and assigning numbers.
But if I use the model-consistent loss of Di Bartolomeo et al. (2016), I have to expand the squares of their loss function and match it with the first term in J = infl^2+omega_y*y^2+b*J(+1), namely with infl^2+omega_y*y^2. Right? (I have already done the expansion.)

The expanded loss is
\begin{align}
L_t &= \frac{1}{2}\Big[\lambda_1 \pi_{t-1}^2 + \lambda_2 y_t^2 + \lambda_3 \pi_t^2 + \lambda_4 (c_t^1)^2 \nonumber\\
&\quad + \lambda_5 \pi_{t-1} y_t + \lambda_6 \pi_{t-1}\pi_t + \lambda_7 \pi_{t-1} c_t^1 + \lambda_8 y_t \pi_t + \lambda_9 y_t c_t^1 + \lambda_{10} \pi_t c_t^1\Big]
\end{align}
So I replace infl^2+omega_y*y^2 in line 39 with this L_t and define \lambda_1 to \lambda_{10}? Is that the whole job?
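
For concreteness, a sketch of how the matched equation in the model block could look (the names lam1 to lam10 for \lambda_1 to \lambda_{10} and c1 for c_t^1 are illustrative and would have to be declared as parameters and as a variable, respectively):

% recursive loss with the expanded period loss of Di Bartolomeo et al. (2016), sketch
J = 0.5*( lam1*infl(-1)^2 + lam2*y^2 + lam3*infl^2 + lam4*c1^2
        + lam5*infl(-1)*y + lam6*infl(-1)*infl + lam7*infl(-1)*c1
        + lam8*y*infl + lam9*y*c1 + lam10*infl*c1 )
    + b*J(+1);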

The parameters of the loss depend on the model parameters a and t.
If I now update a in a loop, do I use the set_param_value command for a and for \lambda_1 to \lambda_{10}?

  1. If you want conditional welfare, you have to obtain it in the objective function with something like
[oo_.dr,info,M_,options_,oo_] = resol(0,M_,options_,oo_); %get decision rules
if info(1) %filter out error codes by returning a penalty value
    fval=10e6+sum(x_opt.^2);
    return;
end

%% simulate conditional welfare
initial_condition_states = repmat(oo_.dr.ys,1,M_.maximum_lag); %use the steady state as initial condition
shock_matrix = zeros(1,M_.exo_nbr); %one period of zero shocks (rows: periods, columns: shocks)
y_sim = simult_(initial_condition_states,oo_.dr,shock_matrix,options_.order); %simulate one period to get the value

fval=-y_sim(strmatch(target_name,M_.endo_names,'exact'),2); %read out welfare conditional on starting at the steady state; minus sign because the optimizer minimizes
  2. With Ramsey, you can obtain both conditional and unconditional welfare.
  3. Yes, I impose bounds. The reasons are that i) the objective often becomes very flat for higher parameter values (I did not set an upper bound above, but usually you should) and ii) the parameters often used have natural bounds (stationarity and the Taylor principle).
  4. I cannot completely follow the description in the last post, but it sounds right. Note that the loop for set_param_value loops over all parameters defined in the x_opt_name cell passed to the objective (see the sketch below), so you would not need to modify anything here.
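
For reference, the relevant part of such an objective function typically looks like the following sketch (x_opt is the vector of current parameter guesses, x_opt_name the cell defined above):

% inside the objective function: write the current guess into the model
% parameters before solving; loops over every row of the x_opt_name cell
for ii = 1:size(x_opt_name,1)
    set_param_value(x_opt_name{ii,1}, x_opt(ii));
end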

Hi,
Thank you very much for this very useful discussion.

I was wondering: how sensitive are optimizers such as get_minus_welfare_objective.m and get_consumption_equivalent_unconditional.m to the arbitrary initial values that are fed into them?

Thanks a lot

The results are very sensitive. You are looking for a global optimum, but most optimizers are designed to find local minima.
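
One common safeguard is a simple multi-start approach: draw several initial values within the bounds, run the local optimizer from each, and keep the best result. A sketch (the bounds, the number of starts, and the use of fmincon are placeholders, not part of the codes discussed above):

% objective_handle is a placeholder for a handle to your objective function,
% e.g. a wrapper around get_minus_welfare_objective.m
lb = [0; 1; 0];  ub = [1; 5; 2.5];        % illustrative bounds on the coefficients
n_starts  = 20;
best_fval = Inf;
for jj = 1:n_starts
    x0 = lb + rand(size(lb)).*(ub - lb);  % random initial guess inside the bounds
    [x_cand, fval_cand] = fmincon(objective_handle, x0, [],[],[],[], lb, ub);
    if fval_cand < best_fval              % keep the best local optimum found so far
        best_fval = fval_cand;
        x_best    = x_cand;
    end
end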

Thanks a lot for the feedback, Prof. Pfeifer,

Now I also see that you have already answered a question very similar to mine here:

I suppose that if the interval within the bounds is short enough, say \phi_{\pi}\in [0,5] and \phi_{\text{other variable}}\in [0,2.5], then it is also possible to show results for a couple of different initial guesses to enhance credibility… As your reply seems to imply, results from OSR are subject to the same kind of fragility.

Most authors seem to provide a reasonably well-explained initial guess and then run the numerical optimizer to search for optimal coefficient values. I am thinking of Gertler and Karadi (2011) “A Model of Unconventional Monetary Policy”, Angeloni and Faia (2013) “Capital Regulation and Monetary Policy with Fragile Banks”, Signoretti and Gambacorta (2014) “Should Monetary Policy Lean Against the Wind?”, and Lambertini, Mendicino, and Punzi (2011) “Leaning Against Boom–Bust Cycles in Credit and Housing Prices”, just to mention a few. Do you agree?

Actually, most of the time authors do not explain how they made sure not to get stuck at local optima. But they should do so.
