Hi all,

I have the following problem/question.

Suppose that the central bank wants to minimize the volatility of inflation and consumption, i.e. the loss function is the sum of the inflation and consumption variances.

I first ran the Ramsey policy (see model_Ramsey.mod) with

```
planner_objective PI^2+C^2;
ramsey_policy(planner_discount=0.99);
```

and found that the loss function value is 15.5275 (this is NOT the "Approximated value of planner objective function" reported by Dynare).
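For reference, I compute the loss directly from the unconditional variances Dynare reports after the run, roughly like this (a sketch; `PI` and `C` are the variable names from my mod file, and the indexing into `oo_.var` assumes both variables appear in the simulated variable list — details may differ across Dynare versions):

```
% Sketch: recover the loss var(PI) + var(C) from Dynare's output
% structures after ramsey_policy/stoch_simul has run.
% oo_.var is the variance-covariance matrix of the simulated variables.
iPI  = find(strcmp('PI', var_list_));   % position of inflation
iC   = find(strcmp('C',  var_list_));   % position of consumption
loss = oo_.var(iPI, iPI) + oo_.var(iC, iC);
```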

Then I ran the same model with osr (actually I did a grid search over the coefficient in the Taylor rule, see model_osr.mod) and found that the loss function is 1.6949.
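For comparison, instead of a manual grid search the same objective can be handed to Dynare's built-in optimal-simple-rule routine, along these lines (a sketch; `phi_pi` is a placeholder name for the Taylor-rule coefficient, not necessarily the one in my mod file):

```
// Sketch: let osr optimize the Taylor-rule coefficient
// against the loss var(PI) + var(C) with unit weights.
osr_params phi_pi;

optim_weights;
PI 1;
C 1;
end;

osr PI C;
```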

So the loss function under the Ramsey policy is larger than the loss function obtained with the optimal Taylor rule, which is not what I expected to find (I used the same discount factor, 0.99, in both cases).

Is there an explanation for this result?

Many thanks for your help

Best

Fabio

model_osr.mod (1.54 KB)

model_Ramsey.mod (1.19 KB)