Hi,

I am trying to find a way to model a permanent shock. I have some candidate solutions in mind, but they do not allow me to extract welfare (in the traditional way) afterwards.

- I could allow a unit root in the AR(1) shock process, but theoretical moments do not exist with a unit root, so I cannot obtain a welfare measure.
- Under perfect foresight, because everything is deterministic, I guess we also cannot compute welfare (i.e. the mean of utility).

Any other suggestions are appreciated. Thank you so much.

What exactly is the experiment you have in mind? What is not feasible is having permanent shocks every single period in a stochastic context.

Many thanks,

I want to have a permanent shock in the first period and solve the model in a stochastic environment, using a unit-root AR(1) process like this:

A_t = A_{t-1} + \epsilon_t

But this process has a unit root, so I cannot obtain variance measures for some of the variables.
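For concreteness, a sketch of how such a process might be declared in a mod file (the variable and shock names are hypothetical, and the rest of the model is omitted):

```matlab
% Hypothetical unit-root process in a Dynare model block:
% a one-time innovation to eps_A shifts A permanently.
var A;          // exogenous state, e.g. the (log) tax level
varexo eps_A;   // one-time innovation

model;
// ... rest of the model ...
A = A(-1) + eps_A;
end;
```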

That is not a well-defined question. The agent would expect that process to continue, implying that A_t would grow above all bounds over time.

Many thanks

I guess that with a one-time shock under a unit root like this, we shift the economy to a new steady state. At least that is what I observe from the IRFs. For example, I want to impose a permanent tax using the above process. I don’t really see why A_t could keep growing.

With `stoch_simul` you do rational expectations modeling. This implies that agents know the shock distribution. So the process is not confined to the initial permanent shock; rather, it is a random walk in every period. And random walks have no bounds.
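To see the unboundedness concretely: for A_t = A_{t-1} + \epsilon_t with innovation variance sigma^2, the variance of A_t is t*sigma^2, which grows without limit. A quick MATLAB illustration (all names are just for this sketch):

```matlab
% Sample variance of a random walk grows roughly linearly in t
sigma = 0.01;                  % innovation standard deviation
T = 1000; N = 5000;            % horizon, number of simulated paths
A = cumsum(sigma*randn(T,N));  % N random-walk paths starting at 0
var_T = var(A(end,:));         % close to T*sigma^2 = 0.1
```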

Thank you so much,

Is there any way to impose a permanent shock in a stochastic setup? I understand why theoretical moments cannot be computed under a unit root, and I am looking for a workaround.

Again, what exactly are you trying to achieve? Do you simply want to evaluate conditional welfare for a new level of an exogenous variable? If you do not want the unit root process to continue, then perfect foresight may be the right way to proceed.
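In case it helps, a skeleton of such a perfect foresight experiment (the variable name and numbers are hypothetical):

```matlab
% Permanent change in an exogenous variable under perfect foresight:
% the economy starts at the old steady state and converges to the new one.
initval;
tau = 0.20;    // old tax level
end;
steady;

endval;
tau = 0.25;    // new, permanently higher tax level
end;
steady;

perfect_foresight_setup(periods=200);
perfect_foresight_solver;
```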

Many thanks!

That is what I want to do, evaluate conditional welfare for a permanent shock. Is it feasible under a perfect foresight setup?

Yes, that should be straightforward to do.

Thank you.

Normally, welfare is obtained from `oo_.mean` for the welfare variable. But somehow, I cannot find that object when I solve the model with perfect foresight. Sorry if I am making a mistake.

That would be unconditional welfare, which is not the object you are interested in. `evaluate_planner_objective` should be able to provide you with conditional welfare after a perfect foresight simulation.
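A sketch of the command sequence, assuming a `planner_objective` has been declared in the mod file (the period utility shown is hypothetical):

```matlab
planner_objective log(c) - chi*h^2/2;   // hypothetical period utility

perfect_foresight_setup(periods=200);
perfect_foresight_solver;
evaluate_planner_objective;             // reports conditional welfare
```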


Hi,

Thank you so much.

Sorry for the follow-up question: if I want unconditional welfare, can I use `extended_path` to obtain it, or is there currently no way to get it?

I tried to implement `extended_path`, but no theoretical moments are reported.

No, extended path cannot compute theoretical moments. For complicated decision rules, that is generally not feasible. But you may be able to compute simulated moments.
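For instance, after a long stochastic simulation you might compute a simulated mean from `oo_.endo_simul` along these lines (the variable name and burn-in length are hypothetical):

```matlab
% Simulated mean of a variable after e.g. a long extended_path simulation
Welfare_pos = strmatch('Welfare', M_.endo_names, 'exact');
sims        = oo_.endo_simul(Welfare_pos, :);
burn_in     = 1000;                            % discard initial periods
Welfare_sim_mean = mean(sims(burn_in+1:end));
```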

Many thanks,

I guess the simulated moments of the utility function (i.e. its mean) would also be acceptable. I am a bit confused about how to obtain the simulated moments. I am using some mod files from tests/ep · master · Dynare / dynare · GitLab and am not sure how to get them. With periods>0, I can usually obtain them in a stochastic setup. Is it the mean of the variables of interest in `oo_.endo_simul`?

Second, I tried to follow your valuable comment. `evaluate_planner_objective` seems to be used with Ramsey. However, I want to pin down some parameters using optimal welfare analysis. It seems like I cannot do that with `evaluate_planner_objective`. Am I correct?

Thanks a million.

Can you elaborate on what exactly you are trying to do? Are you using `ramsey_model`? Also, @stepan-a may be able to help with extended path.

Thanks,

I don’t want to use `ramsey_model`. I actually want to do optimal monetary policy by maximising welfare. But I care about the optimal parameter values, not the path of the policy rate. So I need a measure of welfare.

In a standard stochastic setup, it is quite easy to get unconditional welfare:

```matlab
Welfare_pos   = strmatch('Welfare', M_.endo_names, 'exact');
Welfare_uncon = oo_.mean(Welfare_pos);
```

So I can choose the parameters that maximize this welfare. You said that I can get simulated moments using `extended_path`, so I would like a simulated moment of Welfare. I am looking for a way to obtain it with `extended_path` or perfect foresight.

Second, I have never used `evaluate_planner_objective`. Can it be used outside a `ramsey_model` setup, i.e. does it give the conditional welfare value in the usual sense, like the following? Please provide an example if it is feasible.

```matlab
Welfare_pos = strmatch('Welfare', M_.endo_names, 'exact');
Welfare_con = oo_.dr.ys(Welfare_pos) + 0.5*oo_.dr.ghs2(oo_.dr.inv_order_var(Welfare_pos));
```

Thank you so much

Hi,

Do you have any pointers on where to start for computing simulated moments using perfect foresight or extended path?

Many thanks.

So you want to do optimal simple rules. But I still don’t understand the experiment you have in mind. You are talking about unconditional moments, but at the same time you have a particular experiment with a permanent shock in mind, which suggests some conditionality.
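If the goal is the welfare-maximizing coefficient of a simple rule during the transition, one possibility is a grid search in a driver script that re-runs the mod file for each candidate value (a rough sketch; the mod file name, the parameter passing via a `.mat` file, and the variable names are all hypothetical):

```matlab
% Grid search over a hypothetical policy coefficient phi_pi
phi_grid = 1.1:0.1:3.0;
welfare  = NaN(size(phi_grid));
for ii = 1:numel(phi_grid)
    phi_pi = phi_grid(ii);
    save('param_in.mat','phi_pi');   % the mod file would load this value
    dynare my_model noclearall;      % solves the perfect foresight problem
    Welfare_pos = strmatch('Welfare', M_.endo_names, 'exact');
    welfare(ii) = oo_.endo_simul(Welfare_pos, 2);  % welfare in first simulation period
end
[~, best] = max(welfare);
phi_opt   = phi_grid(best);
```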

Many thanks,

I want to simulate the permanent tax shock and find the optimal simple rule during this transition. I could also work with conditional welfare, but I cannot figure out how to use it without the `ramsey_model` setup.