oo_.planner_objective_value

Hi,

Could you let me know what “oo_.planner_objective_value” reports in “ramsey_policy”?
It is not the unconditional (theoretical) mean of welfare.
Is it the steady-state welfare plus 0.5*Delta^2 in the second-order approximate solution?
Or is it the steady-state welfare plus 0.5*Delta^2 plus higher-order terms (namely coefficients multiplied by variances) in the second-order approximate solution?

Thanks in advance,
Ippei


No, it is the conditional welfare.
The formula is (evaluate_objective_function.m)

planner_objective_value = Wbar + Wy*yhat + Wu*u + Wyu*yhat*u + 0.5*(Wyy*yhat^2 + Wuu*u^2 + Wss);
That is, it is a full second order expansion. Note that the concept of welfare is

W_t = U_t + beta*E_t(W_{t+1})

so that today’s endogenous and exogenous states are contained in the information set at time t and affect U_t. You can specify these initial conditions with histval.
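To make the expansion above concrete, here is a minimal numerical sketch; all coefficient values (Wbar, Wy, Wu, …) are made-up illustrative numbers, whereas in Dynare they come from the second-order solution:

```python
# Sketch of the second-order conditional-welfare evaluation above.
# All coefficients are illustrative numbers, not output of a real model.
Wbar = -100.0          # steady-state welfare
Wy, Wu, Wyu = 0.8, 0.3, 0.1
Wyy, Wuu, Wss = -0.5, -0.2, -0.04

yhat = 0.02            # initial endogenous state, deviation from steady state
u = 0.01               # period-t shock (inside the time-t information set)

planner_objective_value = (Wbar + Wy*yhat + Wu*u + Wyu*yhat*u
                           + 0.5*(Wyy*yhat**2 + Wuu*u**2 + Wss))
print(planner_objective_value)
```

Note that both yhat and u enter: as explained above, the period-t shock belongs to the time-t information set, so it shifts conditional welfare.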

Dear Johannes,

Many thanks for your swift reply.

I would like to compute the unconditional welfare using “ramsey_policy,” which was trivial with the earlier Dynare code by Levin and Lopez-Salido, because W(t) = U(t) + beta*W(t+1) could simply be added to the system of equations. Adding W(t) = U(t) + beta*W(t+1) under “ramsey_policy” causes a problem, since it becomes a redundant constraint.

Given the solution of the second-order approximation:
Y(t) = 0.5*DELTA^2 + AA*Y(t-1) + BB*u(t) + 0.5*CC*(Y(t-1) x Y(t-1)) + 0.5*DD*(u(t) x u(t)) + EE*(Y(t-1) x u(t)),
where all variables are deviations from the Ramsey steady state,
is the only way to compute the unconditional welfare to evaluate
E[Y(t)] = 0.5*DELTA^2 + 0.5*CC*var(Y) + 0.5*DD*var(u) + EE*cov(Y(t-1), u(t))?

On the other hand, just to make sure: with initial values at the Ramsey steady state, does oo_.planner_objective_value (the conditional welfare) report
E_t[Y(t)] = 0.5*DELTA^2 + BB*u(t) + 0.5*DD*(u(t) x u(t))?

Do I understand the procedures in Dynare correctly? Also, if there is any easy way to compute the unconditional welfare, I would like to know it.
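One detail worth noting when taking unconditional expectations of such a solution: E[Y(t)] = E[Y(t-1)] also appears on the right-hand side through the AA term, so the mean solves a fixed point. A minimal scalar sketch (all coefficient values are illustrative, not from a real model):

```python
# Sketch (scalar case): unconditional mean implied by a second-order solution
#   Y(t) = 0.5*DELTA2 + AA*Y(t-1) + BB*u(t)
#          + 0.5*CC*Y(t-1)^2 + 0.5*DD*u(t)^2 + EE*Y(t-1)*u(t).
# Taking E[] of both sides, using E[Y(t)] = E[Y(t-1)], E[Y(t-1)*u(t)] = 0,
# and E[Y^2] ~ Var(Y) to second order, gives the fixed point
#   E[Y] = (0.5*DELTA2 + 0.5*CC*Var(Y) + 0.5*DD*var_u) / (1 - AA),
# with Var(Y) taken from the first-order part: Var(Y) = BB^2*var_u/(1 - AA^2).
# All coefficient values below are illustrative.
AA, BB, CC, DD = 0.9, 1.0, -0.3, -0.1
DELTA2, var_u = 0.002, 0.01**2

var_Y = BB**2 * var_u / (1 - AA**2)      # first-order variance of Y
EY = (0.5*DELTA2 + 0.5*CC*var_Y + 0.5*DD*var_u) / (1 - AA)
print(EY)
```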

Sorry for posting a question again after your clear answer.

Cheers,
Ippei

Sorry, but I will have to take a deeper look into this, which might take some time.

Hi Ippei,

in your previous post there shouldn’t be any u(t) terms, because u(t) is observed at the beginning of the period and by assumption belongs to the information set of E_t().

If you want the unconditional welfare (a tricky concept), you could do the following

  1. Include utility = u(y_t), the period utility function, among the variables/equations of the model. This doesn’t generate the same problem as adding the definition of welfare.
  2. Then you can compute E(W_t) = E(u(y_t)) + \beta E(W_{t+1}) as
    E(W_t) = E(u(y_t))/(1-\beta), because E() being the unconditional expectation, E(W_t) = E(W_{t+1}), and you can read E(u(y_t)) as the second-order approximation to the mean of u.
  3. In order to get the second order approximation of the model under Ramsey, you must indicate
    ramsey_model(planner_discount_factor=…);
    stoch_simul(order=2);
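Step 2 can be illustrated outside Dynare with a toy example: replace the model by a two-state Markov chain for period utility. Conditional welfare then solves W = u + beta*P*W, and the stationary mean of W equals E(u)/(1-beta) exactly, because the stationary distribution pi satisfies pi*P = pi. A minimal sketch (the chain and all numbers are assumptions for illustration):

```python
import numpy as np

# Sketch: for the recursion W = u + beta * P @ W (a two-state Markov chain
# standing in for the model), the stationary mean of W equals E(u)/(1-beta).
# Transition matrix and utility values are illustrative.
beta = 0.95
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition matrix
u = np.array([1.0, -0.5])           # period utility in each state

W = np.linalg.solve(np.eye(2) - beta * P, u)   # conditional welfare per state

pi = np.array([2/3, 1/3])           # stationary distribution: solves pi @ P = pi
print(pi @ W, (pi @ u) / (1 - beta))
```

Both printed numbers coincide, which is exactly the identity used in step 2.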

Hope it helps

Michel


Dear Michel and Johannes,

Many thanks for your replies. I thought that I had not received answers. I really appreciate the way to compute unconditional welfare in the current Ramsey routine.

Cheers,
Ippei


May we have an example of the .mod file that does this experiment?

PS: I have set up the LQ Ramsey model by hand. Then I added the required number of multipliers to my policy model and applied the old version (4.) of policy evaluation for Ramsey, using ramsey.static and the oo_ file. It worked, but it does not feel like the easiest way. Can I do it more easily, without computing the LQ approximation by hand?

For which exercise exactly do you need an example?

Any model which first computes Ramsey policy, then uses another .mod file with the same constraints but a different policy rule. The objective function is the same, and as a result I would like to use the value of the objective. Currently I use something like “oo_.mean(1)” to collect the policy result, with welfare as the first variable in the model. But the deviation from the steady-state value is huge, much bigger than when I use LQ.

How about examples/Ramsey_Example.mod · master · Dynare / dynare · GitLab

Can you please let me know where I should look for the value of the objective when the Taylor rule is implemented? Thank you

I added that at examples/Ramsey_Example.mod · a2fde7c832f0d6b784df238bc58dd7399ab37ec7 · Dynare / dynare · GitLab

Thank you.
So you added line 162,
Welfare = log(C) - chi/2*h^2 + beta*Welfare(+1);
and you use order=2 for stochastic simulation.
Should I look then at the reported mean of the Welfare?
Will it produce the same deviation from the steady state as in the LQ model?

Yes, you should look at mean welfare, which corresponds to unconditional welfare. What do you mean with

“Will it produce the same deviation from the steady state as in the LQ model?”

I have done the same models in LQ and non-linear form. They produce the same IRFs, but the difference in consumption equivalent is 2 times bigger.
Main_NL3.m compares nonlinear policies
Main_LQ3 compares LQ approximation
Ramsey_NL3.mod (863 Bytes)
Ramsey_LQ3.mod (875 Bytes)
Policy_NL3.mod (877 Bytes)
Policy_LQ3.mod (820 Bytes)

Main_NL3.m (545 Bytes)
Main_LQ3.m (467 Bytes)
evaluate_planner_objective_LQ3.m (3.1 KB)
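For reference, with the period utility log(C) - chi/2*h^2 used earlier in this thread, a consumption-equivalent gap between two policies has a closed form: scaling consumption by (1+lambda) in every period adds log(1+lambda)/(1-beta) to welfare. A minimal sketch (the welfare numbers below are illustrative, not taken from the attached files):

```python
import math

# Consumption equivalent (CE) under period utility log(C) - chi/2*h^2:
# scaling C by (1+lam) forever adds log(1+lam)/(1-beta) to welfare, so
#   W_policy + log(1+lam)/(1-beta) = W_ramsey
#   =>  lam = exp((1-beta)*(W_ramsey - W_policy)) - 1.
# Welfare values below are illustrative.
beta = 0.99
W_ramsey = -250.00     # unconditional welfare under Ramsey
W_policy = -250.40     # unconditional welfare under the alternative policy

lam = math.exp((1 - beta) * (W_ramsey - W_policy)) - 1
print(lam)             # fraction of consumption that closes the welfare gap
```

Because this mapping is nonlinear in the welfare gap, comparing CE numbers from an LQ evaluation and a full second-order evaluation only makes sense if both welfare levels are computed consistently.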

I am sorry, I am still struggling. Now I am doing OSR, and I want to be sure that evaluating the OSR policy against Ramsey gives the same result as the OSR objective itself.
I run my OSR, take the optimal parameters, and put them into the .mod file.
The OSR objective is, I think, the negative of the Ramsey evaluation:
result_osr = -oo_.osr.objective_function
Can you please give us a file that makes such a comparison and replicates oo_.osr.objective_function with reasonable precision?
Thank you very much

I think my main point is that the traditional approach is LQ: a quadratic objective with linear constraints. Dynare offers either an order-1 or an order-2 setup, while it would be interesting to have an order-2 objective with order-1 constraints. OSR is not a solution, because it does not allow changing the social planner’s discount factor.
That is why I think it would be very useful if an LQ setup were available in Dynare. For now I use your old file to evaluate the policy objective; it was more transparent in previous versions. Of course, people who do not cut corners write their own files, but it would be nice to have a built-in LQ setup in stochastic simulation mode.

Sorry, but I still don’t understand what exactly you are trying to do.