Optimal policy parameters in a non-linear model

Dear all,

I am trying to find the optimal macroprudential policy parameters in my model:

taue = 0.9*taue(-1) + 0.1*(alpha_le*(leverage_e/leverage_e(-1) - 1) + alpha_qky*((qe*ke/ye)/(qe(-1)*ke(-1)/ye(-1)) -1) +  alpha_ve*(ve/ve(-1) - 1) + alpha_cg*(cg - 1));  

Here, 0.9 is the persistence parameter. What I want to find are the values of alpha_le, alpha_qky, alpha_ve, and alpha_cg that maximize welfare.

This is how I calculate the welfare (omega_e):

util_e = log(ce) - (he^(1+psi))/(1+psi);
omega_e = util_e + beta*omega_e(+1);

To calculate welfare, I usually run stoch_simul at second order.

Attached is my code; the model is written in non-linear form.

My question is: how do I obtain the optimal macroprudential policy parameters above, i.e. the ones that maximize welfare, given that my model is written in non-linear form?

It appears to me that one can obtain optimal policy parameters using the osr command, but only if the model is linear. Is that correct?

Should I translate my model into linear form (which I would rather avoid, as linearizing the model by hand is time-consuming)?

Thank you in advance.

Fed_paper_26Apr_mp_opt.mod (11.7 KB)

You need to write an objective function and maximize it. The code for the maximization part to be added to the mod-file would be

x_start=[0.01, 0.01, 0.01, 0.01]';
%define parameters to be optimized and their upper and lower bound
x_opt_name={'alpha_le',0,Inf
            'alpha_qky',0,Inf
            'alpha_ve',0,Inf
            'alpha_cg',0,Inf
            };

options_.nofunctions=1;
options_.nograph=1;
options_.verbosity=0;

%set noprint option to suppress error messages within optimizer
options_.noprint=1;
options_.TeX=0;
% set csminwel options
H0 = 1e-2*eye(length(x_start)); %Initial Hessian 
crit = 1e-8; %Tolerance
nit = 1000;  %Number of iterations
[fhat,x_opt_hat] = csminwel(@welfare_objective,x_start,H0,[],crit,nit,x_opt_name);

while the objective is

function [outvalue]=welfare_objective(x_opt,x_opt_name)
% function [outvalue]=welfare_objective(x_opt,x_opt_name)

global oo_ options_ M_

%% set parameter for use in Dynare
for ii=1:size(x_opt_name,1)
    set_param_value(x_opt_name{ii,1},x_opt(ii));
end

if any(x_opt<cell2mat(x_opt_name(:,2))) || any(x_opt>cell2mat(x_opt_name(:,3))) %make sure parameters are inside their bounds
    outvalue=10e6+sum([x_opt].^2); %penalty function
    return
end

var_list_ = char('omega_e');
info = stoch_simul(var_list_); %get decision rules and moments
if info(1) %filter out error code
    outvalue=1e5+sum([x_opt].^2);
    return;
end
outvalue=-oo_.mean(strmatch('omega_e',var_list_,'exact')); %extract Welfare

Dear Prof Pfeifer,

This code works for me, thank you very much!!

FYI, the last report in the iteration is:

Improvement on iteration 4 = 0.000000000
improvement < crit termination
smallest step still improving too slow, reversed gradient
Total computing time : 0h01m59s

Is this a good signal of a ‘good result’? Do you have a rule of thumb for judging whether the results are good or could still be improved (possibly by changing the starting values)?

From what I understand, the welfare function can be very steep, which may make it difficult to find the optimal values.

Many thanks!!

Dear Prof Pfeifer,

I have followed this post and your reply to obtain the optimal policy parameters that maximise welfare in a Gertler and Karadi model. I got the mod-file and m-file to work. However, the optimal parameters always come out equal to the initial values; I have tried different sets of starting numbers and the output is the same.

I wonder if I am missing something or what I should try to improve the results. It seems unlikely to me that the initial values I have chosen are the optimal ones.

This is an example of what I get:



f at the beginning of new iteration, 131.1940421736
x = 0.79 2.43 0.16
Norm of dx 0
ih = 1


Improvement on iteration 1 = 0.000000000
improvement < crit termination
zero gradient

Could you give me some advice, please? Many thanks.
I am looking forward to your reply.

I attached the mod file

NK1_BR_GK_CM_mpopt1_steadystate.m (4.5 KB)
NK1_BR_GK_CM_mpopt1.mod (5.9 KB)
welfare_objective.m (753 Bytes)

Prof. Jpfeifer,

I am new to MATLAB coding. Do you mean that the maximization code should be put at the end of the mod-file, that I should write a separate m-file containing the objective function, and then just run the mod-file to get the results? If so, what difference does it make if I put the maximization code right before the “stoch_simul” command in the mod-file?

And what does “x_start” mean in your posted code? How can I set the grid step during the search for the optimal parameter values? For example, suppose I define the range of the parameters to be optimized as 0 to 3, rather than 0 to Inf as in your posted code, and I also want to set the grid-search step to 0.01 over this range. How can I do that in Dynare?

Besides, where are the maximized welfare and the optimal parameter values stored? Are they stored in the workspace, e.g. is the maximized welfare just the negative of oo_.mean, and are the optimal parameters the corresponding parameter values in the workspace?

I am looking forward to your reply.

Hi Grant,

Yes, you can write or paste that code at the end of your mod-file, but before the simulations. That worked for me.

Best,
Aida

Hi Aida,

Thanks for your help. Do you mean that I should put that code right before the “stoch_simul” command? If so, what difference does it make if I put the maximization code after the “stoch_simul” command?

Besides, where are the values of maximized welfare and the optimal parameters stored?

  1. You should put it after a stoch_simul-command as that one is needed to initialize all the structures used.
  2. The output of the optimizer
    [fhat,x_opt_hat]
    stores the achieved objective value in fhat (the negative of welfare, since the objective returns -oo_.mean) and the parameter values in x_opt_hat.
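To make the ordering concrete, here is a minimal sketch of the mod-file layout, using the code and variable names from the earlier posts (the stoch_simul options shown are only illustrative):

```matlab
% ... var/parameters/model/shocks/steady_state blocks above ...

stoch_simul(order=2, irf=0) omega_e; % initializes M_, oo_, options_

% the optimization code goes here, i.e. after stoch_simul:
x_start = [0.01, 0.01, 0.01, 0.01]';
x_opt_name = {'alpha_le',0,Inf
              'alpha_qky',0,Inf
              'alpha_ve',0,Inf
              'alpha_cg',0,Inf};
H0   = 1e-2*eye(length(x_start)); % initial Hessian guess for csminwel
crit = 1e-8;                      % convergence tolerance
nit  = 1000;                      % maximum number of iterations
[fhat,x_opt_hat] = csminwel(@welfare_objective,x_start,H0,[],crit,nit,x_opt_name);
```

welfare_objective.m must be in the same folder (or on the MATLAB path) so that it is found when the mod-file runs.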

Dear Prof Pfeifer,

Many thanks for your reply. May I ask some further questions:

  • If I change the x_start values or the lower-bound values, it sometimes does not work. The error message is: ‘Undefined function or variable ‘par_value_lambda’.’ What could be the problem?
  • Can I put negative numbers in x_start and/or the lower-bound values?
  • Sometimes, the optimal parameters come out equal to the initial values. What could be the problem here? To allow for the possibility that the optimal parameters are negative, I tried lowering the x_start and/or lower-bound values to negative numbers, but it did not work.

Thank you very much.

  1. The first one is due to a bug in the above code that I fixed.
  2. Yes, negative numbers are allowed as long as the bounds are consistent with this
  3. Having optimal parameters equal to the starting values is usually a sign of numerical problems. It is often hard to know exactly what goes wrong, but using a different optimizer or different starting values often helps.

Thanks for your kind reply Prof Jpfeifer,

I got my code to work by writing the code after the stoch_simul command.
I have a similar problem to Ratih’s: with some starting values the optimizer returns optimized values, but with other sets it just reports the initial numbers.
At the moment I am playing around with different values, but could you suggest a different optimizer, please? I would like to have an alternative option in mind in case I cannot get optimized values under certain scenarios.
Thank you very much!
Thank you very much!

The cmaes optimizer seems to work pretty well in practice, but it takes quite a long time to run. You would need something along the lines of

    %set CMAES options
    H0=0.2*ones(size(x_start,1),1);
    cmaesOptions = options_.cmaes;
    cmaesOptions.LBounds = [-1000;-1000];
    cmaesOptions.UBounds = [1000;1000];
    [x_opt_hat, fhat, COUNTEVAL, STOPFLAG, OUT, BESTEVER] = cmaes('welfare_objective',x_start,H0,cmaesOptions,x_opt_name);
    x_opt_hat=BESTEVER.x;

Dear Prof Jpfeifer,

Thank you for your reply. I have tried cmaes, and it solves the optimization problem regardless of the scenario. I believe it is more powerful, at least under certain circumstances.

I understand that “bestever” is the best result across all trials of the maximization problem, whereas fmin is the best in the last generation. It seems to make sense that “bestever” is the better choice, am I right? If so, do you know of any case where it would be better to pick fmin? A rule of thumb or criterion, just to know.

Could you kindly advise me on how to use “plotcmaesdat.m”? I read that we can plot the welfare to show the maximum point. I have seen that I should set cmaesOptions.LogModulo to a value different from zero, but I am not sure how to write that code. Any help would be great.

Again, thank you very much for your help!!
Best,

  1. We are almost always interested in the global maximum, so bestever seems appropriate.
  2. Regarding plotcmaesdat.m: the Dynare-packaged cmaes.m does not support this feature. You would need to download and use the original cmaes.m.
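A hedged sketch of what this might look like with Hansen’s original cmaes.m (the LogModulo option, the outcmaes* data files, and the plotcmaesdat call are from that implementation; check its help text, as details may differ across versions):

```matlab
% assumes the original cmaes.m by Nikolaus Hansen is on the MATLAB path,
% replacing the Dynare-packaged version
cmaesOptions.LogModulo = 1;   % record data to the outcmaes* files every generation
[x_opt_hat, fhat, COUNTEVAL, STOPFLAG, OUT, BESTEVER] = cmaes('welfare_objective',x_start,H0,cmaesOptions,x_opt_name);
plotcmaesdat;                 % plot the recorded run from the outcmaes* files
```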

Ok, I understand Prof Jpfeifer. I will try to have a look at cmaes.m for plots then.
Thank you very much for your help and time.
Best,

Prof. Jpfeifer,

Thanks for your help.

  1. Since cmaes is a global optimizer, I want to ask: in your posted code cmaesOptions.LBounds = [-1000;-1000]; cmaesOptions.UBounds = [1000;1000];, does the number of elements refer to the number of parameters to be optimized in x_opt_name? Say, if I have 3 parameters in x_opt_name, should I put cmaesOptions.LBounds = [-1000;-1000;-1000]; cmaesOptions.UBounds = [1000;1000;1000];? Am I right?

  2. Also, do cmaesOptions.LBounds and cmaesOptions.UBounds refer to the lower and upper bounds of the parameter range? If so, what is the difference from the 0-to-Inf parameter range in your posted code %define parameters to be optimized and their upper and lower bound x_opt_name={'alpha_le',0,Inf 'alpha_qky',0,Inf 'alpha_ve',0,Inf 'alpha_cg',0,Inf };?

  3. Moreover, what does x_start mean? Why did you put 0.01 in it? I tried putting the parameter names in it instead of 0.01, and the code still runs.

I am looking forward to your reply.

  1. Yes, the bounds need to be a column vector with the number of rows conforming to the number of estimated parameters.
  2. csminwel does not allow setting bounds for the optimizer. cmaes does. As nobody specified the actual bounds to be used, I just picked some arbitrary numbers. That’s why the two specifications are not equivalent.
  3. x_start is the starting values. It should not work with a string instead of a number.
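Putting points 1 and 2 together, a sketch for a three-parameter case (the ±1000 numbers are the same arbitrary placeholders as in the earlier post; the actual bounds depend on the model):

```matlab
% one row per optimized parameter, in the same order as x_start
cmaesOptions.LBounds = [-1000; -1000; -1000];
cmaesOptions.UBounds = [ 1000;  1000;  1000];
```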

Prof. Jpfeifer,

Many thanks for your reply.

As you said, only cmaes allows setting bounds for the optimizer. But we have already set bounds in your posted code

x_opt_name={'alpha_le',0,Inf
'alpha_qky',0,Inf
'alpha_ve',0,Inf
'alpha_cg',0,Inf
};

, which run from 0 to infinity, and we also have bounds in

cmaesOptions.LBounds = [-1000;-1000]; 
cmaesOptions.UBounds = [1000;1000]; 

Which of these are the bounds that cmaes uses? I am confused about this.
Moreover, if

cmaesOptions.LBounds = [-1000;-1000]; 
cmaesOptions.UBounds = [1000;1000];

are the bounds that cmaes uses, how should I specify the parameters to be optimized before the optimizer call in the mod-file?
When I set the parameter names in x_start as x_start=[rho_r, rho_pi, rho_y]', the code does indeed run, so it seems to work with names instead of numbers. Please see the attached code. Could you please check whether there are any problems?
When I set the bounds as

cmaesOptions.LBounds = [0;1.1;0]; 
cmaesOptions.UBounds = [1;5;3];

, I have the error message:

In an assignment A(:) = B, the number of elements in A and B must be the same.
Error in dyn_first_order_solver (line 251)
info(2) = temp'*temp;

Error in stochastic_solvers (line 267)
[dr,info] = dyn_first_order_solver(jacobia_,M_,dr,options_,task);

Error in resol (line 144)
[dr,info] = stochastic_solvers(dr,check_flag,M,options,oo);

Error in conditional_welfare_objective (line 17)
[oo_.dr,info,M_,options_,oo_] = resol(0,M_,options_,oo_); %get decision rules

Error in cmaes (line 948)
fitness.raw(k) = feval(fitfun, arxvalid(:,k), varargin{:});

Error in housing (line 346)
[x_opt_hat, fhat, COUNTEVAL, STOPFLAG, OUT, BESTEVER] = cmaes('conditional_welfare_objective',x_start,H0,cmaesOptions,x_opt_name);

Error in dynare (line 223)
evalin('base',fname);

Could you please help me figure out what the problem is and how to fix it?

The code is attached; I am looking forward to your reply. conditional_welfare_objective.m (1.1 KB)
housing.mod (8.5 KB)

  1. The code part relating to x_opt_name was not written for use with CMAES. If you keep it in your code, make sure the bounds set there are consistent with the ones in cmaesOptions. How you actually set those bounds depends on your model; I just picked some numbers here as I do not know your model.
  2. Regarding x_start=[rho_r, rho_pi, rho_y]': this is not a string. The assignment uses the calibrated parameter values stored in e.g. the variable rho_r as the starting values.
  3. The error message you get comes from options_.qz_criterium not being set and the solver triggering a case where it is needed. I changed the code at https://github.com/JohannesPfeifer/DSGE_mod/commit/c7e066a275ffc10f51eee27acba789b17d002315 to make it robust against this.
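For reference, the fix amounts to making sure options_.qz_criterium is set before resol is called. A minimal sketch of such a guard (assuming Dynare’s stoch_simul default of 1+1e-6 for stochastic models; the actual commit may differ in detail):

```matlab
% guard at the top of the objective function, before calling resol:
if isempty(options_.qz_criterium)
    options_.qz_criterium = 1+1e-6; % stoch_simul's default for stochastic models
end
[oo_.dr,info,M_,options_,oo_] = resol(0,M_,options_,oo_); %get decision rules
```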

Prof. Jpfeifer,

Thanks for your reply. You wrote that the code part relating to x_opt_name was not written for use with CMAES, but the CMAES call [x_opt_hat, fhat, COUNTEVAL, STOPFLAG, OUT, BESTEVER] = cmaes('welfare_objective',x_start,H0,cmaesOptions,x_opt_name); does indeed include x_opt_name.

So if I do not put the parameter bounds in x_opt_name, how should I define the parameter names and x_opt_name? I have tried x_opt_name={'rho_r' 'rho_pi' 'rho_y'}', but it does not work. Could you please help with this question? I am looking forward to your reply.