Your code is missing the `optim_weights`, `osr_params`, and `osr_params_bounds` blocks, as well as an initial `osr(opt_algo=9);` command.
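In Dynare syntax, a minimal skeleton of these blocks might look as follows (the variable names, parameter names, and bounds are placeholders for illustration, not values from the model discussed here):

```
osr_params phi_pie phi_y;

osr_params_bounds;
phi_pie, 1, 5;
phi_y, 0, 3;
end;

optim_weights;
inflation 1;
output 0.2;
end;

osr(opt_algo=9);
```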

@jpfeifer

Dear Professor Pfeifer,

I am a new user of Dynare and this is my first post on this forum, so please forgive me if I do not follow the forum rules perfectly.

I am implementing the loop on weights that you proposed before, and I would like to ask you some questions.

So far, my code looks like this:

```
% Suppress extra output and speed up the loop
options_.nofunctions = 1;
options_.nocorr = 1;
options_.noprint = 1;
options_.irf = 0;
options_.silent_optimizer = 1;
options_.osr.opt_algo = 9;

% Find positions of the target variables in M_.endo_names
inflation_pos  = strmatch('inflation',M_.endo_names,'exact');
output_pos     = strmatch('output',M_.endo_names,'exact');
Credit_GDP_pos = strmatch('Credit_GDP',M_.endo_names,'exact');
D_intpol_pos   = strmatch('D_intpol',M_.endo_names,'exact');
D_ltvh_pos     = strmatch('D_ltvh',M_.endo_names,'exact');
D_ltve_pos     = strmatch('D_ltve',M_.endo_names,'exact');
D_rwr_pos      = strmatch('D_rwr',M_.endo_names,'exact');

% Find positions of the rule parameters in M_.param_names
phi_pie_pos  = strmatch('phi_pie',M_.param_names,'exact');
phi_y_pos    = strmatch('phi_y',M_.param_names,'exact');
chi_ltvh_pos = strmatch('chi_ltvh',M_.param_names,'exact');
chi_ltve_pos = strmatch('chi_ltve',M_.param_names,'exact');
chi_rwr_pos  = strmatch('chi_rwr',M_.param_names,'exact');

lambda_y  = 0.2;
lambda_by = 0.5:0.1:1.5;
opt.model3 = NaN(length(lambda_by),8);
vv.model3  = NaN(length(lambda_by),8);

for k = 1:length(lambda_by)
    tag_by = lambda_by(k) % no semicolon: display loop progress
    M_.osr.param_names = ["phi_pie"; "phi_y"; "chi_ltvh"; "chi_ltve"; "chi_rwr"];
    M_.osr.variable_weights(inflation_pos,inflation_pos)   = 1;
    M_.osr.variable_weights(output_pos,output_pos)         = lambda_y;
    M_.osr.variable_weights(Credit_GDP_pos,Credit_GDP_pos) = lambda_by(k);
    M_.osr.variable_weights(D_intpol_pos,D_intpol_pos)     = 0.1;
    M_.osr.variable_weights(D_ltvh_pos,D_ltvh_pos)         = 1;
    M_.osr.variable_weights(D_ltve_pos,D_ltve_pos)         = 1;
    M_.osr.variable_weights(D_rwr_pos,D_rwr_pos)           = 1;
    M_.osr.param_indices = [phi_pie_pos; phi_y_pos; chi_ltvh_pos; chi_ltve_pos; chi_rwr_pos];
    M_.osr.variable_indices = [Credit_GDP_pos; D_intpol_pos; D_ltve_pos; D_ltvh_pos; D_rwr_pos; inflation_pos; output_pos];
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    if oo_.osr.error_indicator==0
        % Store the results of iteration k in row k
        % (the original row index always evaluated to 1, overwriting row 1 each time)
        opt.model3(k,1)   = lambda_y;
        opt.model3(k,2)   = lambda_by(k);
        opt.model3(k,3:8) = [oo_.osr.optim_params.phi_pie, oo_.osr.optim_params.phi_y, oo_.osr.optim_params.chi_ltvh, oo_.osr.optim_params.chi_ltve, oo_.osr.optim_params.chi_rwr, oo_.osr.objective_function];
        vv.model3(k,1)   = lambda_y;
        vv.model3(k,2)   = lambda_by(k);
        vv.model3(k,3:8) = [oo_.var(inflation_pos,inflation_pos), oo_.var(output_pos,output_pos), oo_.var(Credit_GDP_pos,Credit_GDP_pos), oo_.var(D_ltvh_pos,D_ltvh_pos), oo_.var(D_ltve_pos,D_ltve_pos), oo_.var(D_rwr_pos,D_rwr_pos)];
    end
end
```

I made some tests with different values of `lambda_by`. In the process, comparing runs with the same values of `lambda_by`, I realized that when I loop using `lambda_by = 0.5:0.1:1.5`, I do not obtain the same optimal parameters as when using `lambda_by = [0.5 1 1.5]`. Is there something wrong in the code?

My second question is related to your suggestion. If I run a simple `osr` block before the loop and then implement exactly the structure you suggested, the weights are reset at each iteration but the optimal parameters are not: they come out the same for every value of `lambda_by`:

```
%Simple OSR run first
optim_weights;
inflation 1;
output 0.2;
Credit_GDP_gap 1;
D_intpol 0.1;
D_ltvh 1;
D_ltve 1;
D_rwr 1;
end;
osr_params phi_pie phi_y chi_ltvh chi_ltve chi_rwr;
osr(opt_algo=9, noprint, nograph);

%Loop over lambda_by
options_.nofunctions = 1;
options_.nocorr = 1;
options_.noprint = 1;
options_.irf = 0;
options_.silent_optimizer = 1;
options_.osr.opt_algo = 9;

%Find position of vars
Credit_GDP_pos = strmatch('Credit_GDP',M_.endo_names,'exact');

lambda_y  = 0.2;
lambda_by = 0.5:0.1:1.5;
opt.model3 = NaN(length(lambda_by),8);
vv.model3  = NaN(length(lambda_by),8);

for k = 1:length(lambda_by)
    tag_by = lambda_by(k) % no semicolon: display loop progress
    M_.osr.variable_weights(Credit_GDP_pos,Credit_GDP_pos) = lambda_by(k);
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    if oo_.osr.error_indicator==0
        % Store the results of iteration k in row k
        % (the original row index always evaluated to 1, overwriting row 1 each time)
        opt.model3(k,1)   = lambda_y;
        opt.model3(k,2)   = lambda_by(k);
        opt.model3(k,3:8) = [oo_.osr.optim_params.phi_pie, oo_.osr.optim_params.phi_y, oo_.osr.optim_params.chi_ltvh, oo_.osr.optim_params.chi_ltve, oo_.osr.optim_params.chi_rwr, oo_.osr.objective_function];
        vv.model3(k,1)   = lambda_y;
        vv.model3(k,2)   = lambda_by(k);
        vv.model3(k,3:8) = [oo_.var(inflation_pos,inflation_pos), oo_.var(output_pos,output_pos), oo_.var(Credit_GDP_pos,Credit_GDP_pos), oo_.var(D_ltvh_pos,D_ltvh_pos), oo_.var(D_ltve_pos,D_ltve_pos), oo_.var(D_rwr_pos,D_rwr_pos)];
    end
end
```

Thank you very much for your advice and suggestions.

Best regards,

Jose Garcia

I found out what the problem was (or rather, my misunderstanding of the osr function). The first osr call updates the `M_.params` vector with the optimized values. In the next run, it re-optimizes the policy rule using as the starting point the loss computed with the variances from the previous run (which, in turn, were computed using the previously optimized parameters).

Is there a proper way to force osr to take the initial param values (for the policy rule) for each iteration?

What I did is:

```
.
.
.
Init_params = M_.params; % store the calibrated parameter vector once, before the loop
for k = 1:length(lambda_by)
    tag_by = lambda_by(k)
    M_.params = Init_params; % reset the rule parameters at each iteration
    .
    .
    .
    osr_res = osr(...)
```

It seems to work.

Thank you

I am not sure I understand. You should make sure your OSR results are independent of the initial parameter choices. But your post suggests they are not.

Thank you for your answer. Actually, I did not quite get what you are suggesting. For instance, if I do not fix the initial condition for the optimisation, how can I compare the optimized variances across different weight values?

My initial problem was related to the fact that OSR used, at each iteration of the loop, a different initial value for the loss function. The differences were due not only to the variation in the weights (`lambda_by`), but also to different values of the variances that the program used to compute the initial loss. In fact, these were the variances computed in the previous iteration.

Is the latter a bad practice if I want to use OSR to compare the optimal variances across different values of the weights?

Thank you for your kind answer.

Best regards,

Jose Garcia

OSR means you are looking for a **global** optimum in your objective function. That global optimum should not depend on where you start looking for your optimum. If you get different optima depending on the starting values, some of them must be **local** ones. If that is what is causing your different results, you need to change your approach, e.g. by trying a global optimizer.
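One way to check whether this is the issue, within the loop structure from the earlier snippets, is to restart the optimizer from several different starting values and compare the attained objectives. This is a sketch, assuming the parameter positions (`phi_pie_pos`, etc.) and the `osr` call from the code above; the starting values themselves are arbitrary illustrations:

```matlab
% Robustness check (sketch): re-run OSR from different starting values for the
% rule parameters and keep the run with the lowest objective. If the attained
% objective differs across rows, the optimizer is stopping at local optima.
start_vals = [1.5 0.125 0 0 0;   % a first, conventional starting rule (illustrative)
              3.0 1.0   1 1 1];  % a second, deliberately different start
best_obj = Inf;
for s = 1:size(start_vals,1)
    M_.params([phi_pie_pos; phi_y_pos; chi_ltvh_pos; chi_ltve_pos; chi_rwr_pos]) = start_vals(s,:)';
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    if oo_.osr.error_indicator==0 && oo_.osr.objective_function < best_obj
        best_obj    = oo_.osr.objective_function; % lowest loss found so far
        best_params = oo_.osr.optim_params;       % corresponding rule parameters
    end
end
```

If `best_obj` is insensitive to the starting row, the reported optimum is more plausibly global; otherwise a global optimizer is needed, as suggested above.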

Thanks, Prof. Pfeifer, it really works!

Thanks for your reply; the doubt in my mind has been resolved now.

Thanks again

Hello Everyone

I am stuck at a similar point. I wish to run OSR in a loop over different parameter values. The main problem is that my optimal weights are composite parameters. If I declare them as parameters, they are not updated at each iteration of the loop. To correct this, I made them model-local variables, but then the OSR command shows an error stating that local variables are not allowed outside the model block.

I need to find a way for the composite parameters to be updated at each loop iteration, so that the updated values are used in the `optim_weights` block of the OSR section.

Any help will be highly appreciated. Thanks in Advance.

You can use a steady state file. See e.g. https://github.com/JohannesPfeifer/DSGE_mod/blob/master/Gali_2015/Gali_2015_chapter_5_discretion.mod where I used the `steady_state_model` block for this purpose.
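A minimal sketch of the idea, with hypothetical names: suppose `f` is the parameter varied in the loop and `omega` is a composite weight that depends on it. Assigning `omega` inside `steady_state_model` means it is recomputed every time the steady state is evaluated, i.e. at every loop iteration (the functional form below is purely illustrative):

```
steady_state_model;
// omega is a composite parameter, recomputed from f at each
// steady-state evaluation (hypothetical functional form)
omega = f/(1-f);
// ... steady-state values of the endogenous variables follow here
end;
```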

Dear Sir

Thank you for your reply. However, when I code these parameters in the `steady_state_model` block (as you did), Dynare gives me the following warning:

```
MODEL_DIAGNOSTICS: The Jacobian of the static model contains Inf or NaN. The problem arises from:
Derivative of Equation 15 with respect to Variable ygap (initial value of ygap: 0)
Derivative of Equation 15 with respect to Variable wgap (initial value of wgap: 0)
Derivative of Equation 40 with respect to Variable wgap (initial value of wgap: 0)
Derivative of Equation 13 with respect to Variable wigap (initial value of wigap: 0)
Derivative of Equation 14 with respect to Variable wigap (initial value of wigap: 0)
Derivative of Equation 16 with respect to Variable wigap (initial value of wigap: 0)
Derivative of Equation 20 with respect to Variable wigap (initial value of wigap: 0)
Derivative of Equation 13 with respect to Variable wegap (initial value of wegap: 0)
Derivative of Equation 14 with respect to Variable wegap (initial value of wegap: 0)
Derivative of Equation 17 with respect to Variable wegap (initial value of wegap: 0)
Derivative of Equation 20 with respect to Variable wegap (initial value of wegap: 0)
Derivative of Equation 40 with respect to Variable wegap (initial value of wegap: 0)
Derivative of Equation 15 with respect to Variable sgap (initial value of sgap: 0)
Derivative of Equation 16 with respect to Variable cigap (initial value of cigap: 0)
Derivative of Equation 17 with respect to Variable cegap (initial value of cegap: 0)
Derivative of Equation 16 with respect to Variable nigap (initial value of nigap: 0)
Derivative of Equation 38 with respect to Variable nigap (initial value of nigap: 0)
Derivative of Equation 17 with respect to Variable negap (initial value of negap: 0)
Derivative of Equation 38 with respect to Variable negap (initial value of negap: 0)
MODEL_DIAGNOSTICS: The problem most often occurs, because a variable with
MODEL_DIAGNOSTICS: exponent smaller than 1 has been initialized to 0. Taking the derivative
MODEL_DIAGNOSTICS: and evaluating it at the steady state then results in a division by 0.
MODEL_DIAGNOSTICS: If you are using model-local variables (# operator), check their values as well.
```

No such warning appears if I keep the composite weights as parameters or as model-local variables.

At the same time, the optimal values are still not the same as when I run the cases individually.

Can you please help me with this?

Thanks in advance.

I would need to see the files.

Thank you very much for your help, Sir. I am still in a bit of trouble. I am running the modified mod file with the `steady_state_model` block both separately and within a loop that changes the value of f.

When run separately with f = 0.5, the optimal values are 0.68 and 0.89. But when run within the loop, at f = 0.5, the optimal values are 0.66 and 0.96.

Is it OK to have this kind of difference?

Is this because of some kind of optimisation problem? I am also getting:

```
Warning: Non-finite fitness range
In cmaes at 974
In dynare_minimize_objective at 360
In osr1 at 132
```

I am using `opt_algo=9` in my code.

I would be very grateful for comments on this.

Thanks in Advance

Yes, that is strange and you should investigate this.

Hi, Prof. Pfeifer, I added your above code to my mod file and I get the following error:

PP.mod (11.4 KB)

```
ERROR: PP.driver (line 675)
var_piH(grid_iter)=oo_.var(piH_pos_var_list_,piH_pos_var_list_);

ERROR: dynare (line 281)
evalin('base',[fname '.driver']);
```

Thanks.

You are trying to read out `piH`, but you only requested `pi` as an output:

```
osr(opt_algo=9) ygap pi;
```

Thanks a lot! I was careless. I modified the model and it works now. However, I am confused about one thing. In the command window, there is:

OPTIMAL VALUE OF THE PARAMETERS:

```
phiy 31.7588
phipi 38.978
phiq 8.11791
rho1 0
```

Objective function : 0.000193508

But in `oo_.osr.optim_params`, there is:

Then, for my model above, which are the optimal parameters?

That depends. You have a loop iterating over parameters at the end. The workspace will contain the results after that loop.

Specifically, I hope to get the following figure and table:

After I run the above model file ,I get a figure:

I think I get the same figure as above one that I uploaded earlier:

I am not sure whether I am correct. Still, I have no idea how to get the data in the table: for example, for every weight, the corresponding OSR parameters and the variances (used for computing the loss), etc. Where can I find them?

You are currently running a loop in which you compute the optimal rule parameters for each weight. You simply need to extract the results from `oo_.osr.optim_params` at each step.
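For instance, something along these lines inside the loop body (a sketch; the parameter names are taken from the OSR output shown above, `grid_iter` is the loop index from your driver file, and `osr_results` is a hypothetical storage matrix preallocated before the loop):

```matlab
% Sketch: store, at each grid point, the optimal rule parameters, the
% attained loss, and a variance entering the loss. Adapt the field and
% position names to your model.
osr_results(grid_iter,1) = oo_.osr.optim_params.phiy;
osr_results(grid_iter,2) = oo_.osr.optim_params.phipi;
osr_results(grid_iter,3) = oo_.osr.optim_params.phiq;
osr_results(grid_iter,4) = oo_.osr.optim_params.rho1;
osr_results(grid_iter,5) = oo_.osr.objective_function;
osr_results(grid_iter,6) = oo_.var(piH_pos_var_list_,piH_pos_var_list_);
```

After the loop, each row of `osr_results` gives the data for one weight, which is exactly what you need to build the table and figure.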