I have the following problem regarding my Dynare output. I want to compare the values of the loss function for an OSR and a Ramsey policy in models that are otherwise identical. As far as I understand, the objective function of both is also the same. However, for the Ramsey policy I get a value for the planner objective of approximately 13.5942, while for the OSR I get a value for the objective function of 0.2334. Can someone explain this to me? Or is there something wrong with my program(s)?
Thanks a lot for your help

There is a fundamental difference in the objective functions. OSR minimizes a weighted sum of unconditional variances, while in your case Ramsey minimizes a discounted sum of conditional variances.

Is this the case? That is, Ramsey computes the intertemporal loss-function value L_Ramsey = loss_t_Ramsey/(1-beta), while for OSR Dynare computes the period loss loss_OSR = loss_t_OSR.
Hence, the intertemporal loss for OSR would be L_OSR = loss_t_OSR/(1-0.99) = 0.2334/0.01 = 23.34.
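To make the conversion concrete, here is a minimal sketch of that calculation. The numbers simply restate the figures from this thread (per-period OSR loss 0.2334, beta = 0.99); the variable names are just for illustration.

```python
beta = 0.99          # discount factor assumed in the thread
loss_t_osr = 0.2334  # per-period OSR loss reported above

# Intertemporal loss: sum_{t=0}^inf beta^t * loss_t = loss_t / (1 - beta)
L_osr_closed_form = loss_t_osr / (1 - beta)

# Cross-check the closed form against a long truncated geometric sum
L_osr_sum = sum(beta**t * loss_t_osr for t in range(5000))

print(round(L_osr_closed_form, 2))  # 23.34
print(round(L_osr_sum, 2))          # 23.34
```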

Also, how do we then verify that the OSR loss is larger than the Ramsey loss?

This is somewhat complicated and will take more time. Part of the problem is the difference between conditional and unconditional variances. Take an AR(1) process. Its conditional variance equals the variance of the error term, sigma_epsilon^2, but its unconditional variance is
sigma_epsilon^2/(1-rho^2).
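A quick numerical check of this distinction, simulating an AR(1) with an assumed rho = 0.9 and unit-variance shocks (both values are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma_eps = 0.9, 1.0  # assumed persistence and shock std. dev.
T = 200_000

# Simulate y_t = rho * y_{t-1} + eps_t
eps = rng.normal(0.0, sigma_eps, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t]

cond_var = sigma_eps**2                   # one-step-ahead (conditional) variance: 1.0
uncond_var = sigma_eps**2 / (1 - rho**2)  # unconditional variance: ~5.26

print(cond_var, uncond_var, np.var(y))    # sample variance should be close to 5.26
```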

Okay, so I conjectured the following. The way the objective loss is constructed in my OSR program is by calculating lambda_x*var(y_hat) + var(pi_hat) + lambda_R*var(R_hat), where the var(.) come from oo_.var. I verified that calculating this manually gives the same value as the objective function does.
I then used the same procedure to calculate the corresponding loss of this objective function under the Ramsey policy. That is, I made the same calculation as above, but now the var(.) are taken from oo_.var after having run Ramsey.
Is this a valid way of comparing welfare under both policies?
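The comparison described above can be sketched as follows. The weights lambda_x and lambda_R and the variance numbers are placeholders; in practice the variances would be read off oo_.var after running each policy in Dynare.

```python
lambda_x, lambda_R = 0.5, 0.1  # assumed loss-function weights (placeholders)

def period_loss(var_y_hat, var_pi_hat, var_R_hat):
    """Per-period loss: lambda_x*Var(y_hat) + Var(pi_hat) + lambda_R*Var(R_hat)."""
    return lambda_x * var_y_hat + var_pi_hat + lambda_R * var_R_hat

# Hypothetical unconditional variances under each policy (illustrative only)
loss_osr = period_loss(0.20, 0.10, 0.30)
loss_ramsey = period_loss(0.15, 0.08, 0.25)

# The smaller value is the better policy under this common criterion
print(loss_osr, loss_ramsey)
```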

For comparison between policies you can use any welfare criterion/loss function that is consistent across approaches. With what you describe, you use the same criterion for both approaches, which is fine. The problem might be that the policies you are looking at were computed under different loss functions than the one you are now using for your comparison.

See the manual on this. oo_.var stores the unconditional variances. Conditional variances are not reported in Dynare. What type of conditioning did you have in mind?

My plan is similar to @tanvintyl5's. I would like to calculate the welfare loss under OSR (through the Dynare function or by looping) and under Ramsey, and then compare them. Since I have heard that welfare can be classified into conditional and unconditional, I am concerned that the OSR and Ramsey losses should be apples-to-apples, i.e. computed in the same manner.

Can an OSR optimal policy ‘exactly’ implement the Ramsey allocation? @stepan-a mentions that in his paper. If I may ask, what does ‘exactly’ mean in this context? For example, should IRFs and welfare estimates match exactly? In Dynare, the welfare loss matches, but the IRFs do not, for example.

And may I also ask a question about the same paper. Taylor-rule parameters estimated with OSR differ under each structural shock. I expect that to happen, since shocks do not affect the economy in the same way, and the central bank may want to react differently depending on the shock. However, Stepan mentions the need to derive OSR rules that are robust across structural shocks. Would central banks want that? That is, responding the same way irrespective of the type of shock?