% find positions of the variables in M_.endo_names; these positions index
% both M_.osr.variable_weights and oo_.var
L_pos=strmatch('L',M_.endo_names,'exact');
q_pos=strmatch('q',M_.endo_names,'exact');
Y_pos=strmatch('Y',M_.endo_names,'exact');
weight_grid_L=0:0.1:1; % I want the weight on credit (L) fixed at 1, but I am not sure how to go about it. Please help!
weight_grid_q=0:0.05:0.5; % only allow the weights on q and Y to vary
weight_grid_Y=0:0.1:1;
n_grid_points_L = length(weight_grid_L);
n_grid_points_q = length(weight_grid_q);
n_grid_points_Y = length(weight_grid_Y);
var_L_CcCR=NaN(n_grid_points_L,n_grid_points_q,n_grid_points_Y);
var_q_CcCR=NaN(n_grid_points_L,n_grid_points_q,n_grid_points_Y);
var_Y_CcCR=NaN(n_grid_points_L,n_grid_points_q,n_grid_points_Y);
for grid_iter_L=1:n_grid_points_L
    for grid_iter_q=1:n_grid_points_q
        for grid_iter_Y=1:n_grid_points_Y
            M_.osr.variable_weights(L_pos,L_pos) = weight_grid_L(grid_iter_L);
            M_.osr.variable_weights(q_pos,q_pos) = weight_grid_q(grid_iter_q);
            M_.osr.variable_weights(Y_pos,Y_pos) = weight_grid_Y(grid_iter_Y);
            oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
            if oo_.osr.error_indicator==0
                var_L_CcCR(grid_iter_L,grid_iter_q,grid_iter_Y)=oo_.var(L_pos,L_pos);
                var_q_CcCR(grid_iter_L,grid_iter_q,grid_iter_Y)=oo_.var(q_pos,q_pos);
                var_Y_CcCR(grid_iter_L,grid_iter_q,grid_iter_Y)=oo_.var(Y_pos,Y_pos);
            end
        end
    end
end
The variances need to be three-dimensional objects if you trace out all possible combinations of the three weights.
Regarding the code example you provided earlier, would I be correct to re-specify it as:
weight_grid=0:0.01:1;
n_grid_points=length(weight_grid);
var_y=NaN(n_grid_points,1);
var_pi=NaN(n_grid_points,1);
for grid_iter=1:n_grid_points
    M_.osr.variable_weights(pi_pos,pi_pos) = 1; % fix the weight on pi at 1
    M_.osr.variable_weights(y_pos,y_pos) = weight_grid(grid_iter);
    oo_.osr = osr(var_list_,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    if oo_.osr.error_indicator==0
        var_y(grid_iter)=oo_.var(y_pos_var_list_,y_pos_var_list_);
        var_pi(grid_iter)=oo_.var(pi_pos_var_list_,pi_pos_var_list_);
    end
end
That is, if I only want the weight on y_hat to vary over [0, 1] while the one on pi_hat stays fixed at 1?
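For the snippet above to run, the position variables it indexes with have to be defined beforehand. A minimal sketch, assuming the endogenous variables are actually named y_hat and pi_hat (the names are assumptions, not taken from this thread):

```matlab
% sketch: the names 'y_hat' and 'pi_hat' are assumptions
y_pos  = strmatch('y_hat',M_.endo_names,'exact');       % position in the weighting matrix
pi_pos = strmatch('pi_hat',M_.endo_names,'exact');
y_pos_var_list_  = strmatch('y_hat',var_list_,'exact'); % position in oo_.var
pi_pos_var_list_ = strmatch('pi_hat',var_list_,'exact');
```

Note that oo_.var is ordered according to var_list_, which is why the snippet keeps separate position variables for the weighting matrix and for the variance matrix.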
Dear Prof Pfeifer,
after following the discussion above, I have been trying to replicate Iacoviello (2005) in order to plot the policy frontier, but the code does not run:
Error using cmaes (line 316)
Initial search point, and problem dimension, not determined
I ran some tests with different values of lambda_by. In the process, comparing the same values of lambda_by, I realized that looping over lambda_by = (0.5:0.1:1.5) does not yield the same optimal parameters as looping over lambda_by = [0.5 1 1.5]. Is there something wrong in the code?
My second question is related to your suggestion. If I run a simple osr block before the loop and then implement exactly the structure you suggested, the weights are reset, but the optimal parameters are not: they are the same for any lambda_by.
%simple OSR
optim_weights;
inflation 1;
output 0.2;
Credit_GDP_gap 1;
D_intpol 0.1;
D_ltvh 1;
D_ltve 1;
D_rwr 1;
end;
osr_params phi_pie phi_y chi_ltvh chi_ltve chi_rwr;
osr(opt_algo=9, noprint, nograph);
%Loop over lambda_by
options_.nofunctions=1;
options_.nocorr=1;
options_.noprint=1;
options_.irf=0;
options_.silent_optimizer=1;
options_.osr.opt_algo=9;
%Find position of vars
Credit_GDP_pos=strmatch('Credit_GDP',M_.endo_names,'exact');
lambda_y = 0.2;
lambda_by = (0.5:0.1:1.5);
opt.model3=NaN(length(lambda_by),8);
for k = 1:length(lambda_by)
    tag_by = lambda_by(k) % display the current weight
    M_.osr.variable_weights(Credit_GDP_pos,Credit_GDP_pos) = lambda_by(k);
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    if oo_.osr.error_indicator==0
        % store the weights, the optimized parameters, and the loss in row k
        opt.model3(k,1) = lambda_y;
        opt.model3(k,2) = lambda_by(k);
        opt.model3(k,3:8) = [oo_.osr.optim_params.phi_pie, oo_.osr.optim_params.phi_y, oo_.osr.optim_params.chi_ltvh, oo_.osr.optim_params.chi_ltve, oo_.osr.optim_params.chi_rwr, oo_.osr.objective_function];
        % store the corresponding variances in row k
        vv.model3(k,1) = lambda_y;
        vv.model3(k,2) = lambda_by(k);
        vv.model3(k,3:8) = [oo_.var(inflation_pos,inflation_pos), oo_.var(output_pos,output_pos), oo_.var(Credit_GDP_pos,Credit_GDP_pos), oo_.var(D_ltvh_pos,D_ltvh_pos), oo_.var(D_ltve_pos,D_ltve_pos), oo_.var(D_rwr_pos,D_rwr_pos)];
    end
end
Thank you very much for your advice and suggestions.
I found out what the problem was (or rather, my misunderstanding of the osr function). The first osr run updates the M_.params vector with the optimized parameters. The next run then re-optimizes the policy rule starting from those values, so its initial loss function is computed with the variances from the previous run (which in turn were computed using the previously optimized parameters).
Is there a proper way to force osr to start from the initial parameter values (for the policy rule) in each iteration?
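One way to achieve this is to cache the parameter vector once before the loop and restore it before every osr call. A minimal sketch, reusing lambda_by and Credit_GDP_pos from the code above:

```matlab
initial_params = M_.params;      % cache the original calibration once, before the loop
for k = 1:length(lambda_by)
    M_.params = initial_params;  % restore the optimizer's starting point each iteration
    M_.osr.variable_weights(Credit_GDP_pos,Credit_GDP_pos) = lambda_by(k);
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
end
```

This makes every iteration start the optimization from the same calibration, independently of the optima found in earlier iterations.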
What I did is:
I am not sure I understand. You should make sure your OSR results are independent of the initial parameter choices. But your post suggests they are not.
Thank you for your answer. Actually, I did not quite get what you are suggesting. For instance, if I do not fix the initial condition for the optimization, how can I compare the optimized variances across the different weights?
My initial problem was that OSR computed, at each iteration of the loop, a different initial value of the loss function. That difference was not only due to the variation of the weights (lambda_by), but also to the different variances the program used to compute the initial loss function; in fact, these were the variances computed in the previous iteration.
Is the latter bad practice if I want to use OSR to compare optimal variances across different weight values?
OSR means you are looking for a global optimum of your objective function. That global optimum should not depend on where you start looking. If you get different optima depending on the starting values, some of them must be local ones. If that is what is causing your different results, you need to change your approach, e.g. by trying a global optimizer.
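A simple robustness check along these lines is a multi-start loop: restart the optimizer from several initial values and compare the resulting objectives. A sketch, assuming a single osr parameter named phi_pie and arbitrary candidate starting points (both are assumptions for illustration):

```matlab
start_values   = [0.5 1.5 3];  % candidate starting points (assumed values)
phi_pie_pos    = strmatch('phi_pie',M_.param_names,'exact');
best_objective = Inf;
for s = 1:length(start_values)
    M_.params(phi_pie_pos) = start_values(s);  % reset the optimizer's starting point
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    if oo_.osr.error_indicator==0 && oo_.osr.objective_function < best_objective
        best_objective = oo_.osr.objective_function;
        best_params    = oo_.osr.optim_params;  % keep the best run found so far
    end
end
```

If the runs converge to visibly different objective values, some of the reported optima are local ones.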
I am stuck at a similar point. I want to run OSR in a loop over different parameter values. The main problem is that my optimal weights are composite parameters. If I declare them as parameters, they are not updated in each loop iteration. To work around this, I made them local variables, but then the osr command throws an error stating that local variables are not allowed outside the model block.
I need a way to update the composite parameters in each loop iteration and then feed the updated values into the optim_weights block of the osr command.
Any help will be highly appreciated. Thanks in advance.
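One possible workaround, following the pattern used earlier in this thread, is to recompute the composite weight in MATLAB inside the loop and write it directly into M_.osr.variable_weights, which is where the optim_weights block ends up anyway. A sketch; the deep parameter beta, the composite expression, and the position y_pos are all hypothetical:

```matlab
beta_grid = 0.95:0.01:0.99;  % grid over the deep parameter (assumed)
for k = 1:length(beta_grid)
    set_param_value('beta',beta_grid(k));             % update the deep parameter
    composite_weight = beta_grid(k)/(1-beta_grid(k)); % recompute the composite weight in MATLAB
    M_.osr.variable_weights(y_pos,y_pos) = composite_weight; % overwrite the optim_weights entry
    oo_.osr = osr(M_.endo_names,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
end
```

This sidesteps the local-variable restriction entirely, because nothing composite has to appear in the optim_weights block itself.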