Ramsey policy

Hi, I am currently using the ramsey_policy command in order to compare the welfare results it gives with those obtained using the osr command.
Is there any caveat in doing that? Let me explain better: the value that I obtain with Ramsey policy (loss function defined as a weighted average of inflation and output variances) seems to be bigger than the one I obtain with an optimal Taylor rule (with the same weights on the variables in the loss function), which is not what I expected to find. Are these results comparable or not?
The discount factor seems to matter: in fact, as I decrease it, I (obviously) obtain smaller values of the objective function.
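A minimal sketch of the two setups (in two separate .mod files; the variable names pi and y, the rule coefficients alpha_pi and alpha_y, and the numerical weights are placeholders, not my actual model):

```
// File 1 -- Ramsey experiment: the planner maximizes the objective,
// so the loss enters with a negative sign (weights are placeholders).
planner_objective -(0.5*pi^2 + 0.5*y^2);
ramsey_policy(planner_discount=0.99, order=1);
```

```
// File 2 -- OSR experiment: same weights, now on unconditional variances.
osr_params alpha_pi alpha_y;   // Taylor-rule coefficients to be optimized

optim_weights;
pi 0.5;
y 0.5;
end;

osr;
```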
Thanks for your help.

I think that you cannot compare the two directly.

In the Ramsey policy case, with a loss function as the planner objective, the welfare that you obtain is a discounted sum of the future squared deviations from the mean.
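In symbols (using $\pi_t$ for inflation, $y_t$ for output, $\lambda$ for the relative weight, and $\beta$ for the planner's discount factor; this is just the generic form of the loss you describe):

$$ W_0 = E_0 \sum_{t=0}^{\infty} \beta^t \left[ \lambda \pi_t^2 + (1-\lambda)\, y_t^2 \right]. $$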
In the simple rule case, the welfare criterion is the unconditional expectation of the squared deviations, which is equal to the limit of the finite-sample average of future squared deviations.
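With the same notation:

$$ L = E\!\left[ \lambda \pi_t^2 + (1-\lambda)\, y_t^2 \right] = \lambda\, \mathrm{Var}(\pi_t) + (1-\lambda)\, \mathrm{Var}(y_t), $$

where the last equality holds because the deviations are taken from the mean; this weighted sum of unconditional variances is what osr minimizes.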

These are two different mathematical objects, and I don’t see an easy way of relating them directly. You would need to set the discount factor equal to 1 in the Ramsey policy case to get rid of the discounting, but then the sum would not converge…
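For what it is worth, there is a rough heuristic link (a standard Abelian limit result for stationary processes, not something either command reports): assuming the conditional expectations $E_0[x_t]$ converge to the unconditional one $E[x_t]$,

$$ \lim_{\beta \to 1}\, (1-\beta)\, E_0 \sum_{t=0}^{\infty} \beta^t x_t = E[x_t], $$

so for $\beta$ close to 1 the Ramsey welfare is on the order of $1/(1-\beta)$ times the per-period osr criterion. That is consistent with your Ramsey value being larger and shrinking as you lower $\beta$, but it is only an approximation: the conditional expectation also depends on the initial state, so the two numbers remain not directly comparable.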