Some questions on Loop over parameters

Hi,
Recently, I have been learning how to loop over parameters. I searched almost all posts on this topic, but I still encountered some confusing problems for which I could not find an exact answer in the forum, so I decided to ask for help.
The example is used for OSR (optimal simple rules) analysis. To keep things simple, I just set two values of a specific parameter (phipi=0 or 1) to examine whether different initial values of the parameter affect the resulting optimal parameters and the corresponding welfare loss. I have tried the following three methods:

  1. Repeatedly running Dynare (osr1.mod (1.9 KB))
  2. Looping over the Dynare osr block (osr2.mod (2.3 KB))
  3. Looping over the MATLAB routine oo_.osr (osr3.mod (2.4 KB))

Results of several trials by using these codes:

  1. Results of repeatedly running Dynare
    when initial value of phipi is: 0
    optimal param of phipi and phiy : -0.0052839, 0.11621
    welfare loss : 3.8016
    when initial value of phipi is: 1
    optimal param of phipi and phiy : 0.99343, 0.12251
    welfare loss : 2.6148

  2. Results of looping over the Dynare osr block (phipi=0 in the parameters block)
    when initial value of phipi is: 0
    optimal param of phipi and phiy : -0.0052839, 0.11621
    welfare loss : 3.8016
    when initial value of phipi is: 1
    optimal param of phipi and phiy : 1.0378, 0.11462
    welfare loss : 3.8016

  3. Results of looping over the Dynare osr block (phipi=1 in the parameters block)
    when initial value of phipi is: 0
    optimal param of phipi and phiy : 0.0027466, 0.15222
    welfare loss : 2.6148
    when initial value of phipi is: 1
    optimal param of phipi and phiy : 0.99343, 0.12251
    welfare loss : 2.6148

  4. Results of looping over the MATLAB routine oo_.osr (phipi=0 in the parameters block)
    when initial value of phipi is: 0
    optimal param of phipi and phiy : -0.0052839, 0.11621
    welfare loss : 3.8016
    when initial value of phipi is: 1
    optimal param of phipi and phiy : 1.0378, 0.11462
    welfare loss : 3.8016

  5. Results of looping over the MATLAB routine oo_.osr (phipi=1 in the parameters block)
    when initial value of phipi is: 0
    optimal param of phipi and phiy : 0.0027466, 0.15222
    welfare loss : 2.6148
    when initial value of phipi is: 1
    optimal param of phipi and phiy : 0.99343, 0.12251
    welfare loss : 2.6148

To get result 1, I just change the initial value of phipi in the parameters block of osr1 and run Dynare twice. Running osr2 twice gives results 2 and 3, and running osr3 twice gives results 4 and 5. Now my questions are as follows:

  1. About the concept of the OSR command. Although I have used the global optimizer (opt_algo=9) and the seed command set_dynare_seed('default'), whichever method I choose, different initial values of the parameter phipi lead to different OSR parameters and welfare losses. Why? Doesn't a global optimum imply the same optimal parameters and welfare loss? Or does it just mean the same minimum welfare loss?

  2. About the concept of 'looping over parameters'. I know that looping over parameters is efficient. I originally thought the efficiency just meant less time and more convenient comparison, but it seems the results are also different (compare result 1 with results 2-5). What is the rationale for the different results between repeatedly running Dynare (method 1) and looping over the MATLAB routine oo_.osr (method 3)? Can I simply use the results from repeatedly running Dynare?

  3. Different values in the parameters block affect the results. Looking at the two loop codes (osr2 and osr3), when I manually change the value of phipi in the parameters block, the results turn out to be different. Since I have used the set_param_value command to reset the initial value of the parameter phipi, why does that happen?

  4. What is the difference between osr2 and osr3? Why are the results the same when setting identical values in the parameters block (result 2 = result 4, result 3 = result 5), but different when I delete the seed command? I just followed some examples about stoch_simul to set up osr2, and I guessed it would behave the same as repeatedly running Dynare. But that is not the case in my example, so I am not sure whether osr2 is written correctly.

These questions have been confusing me for quite a long time. I would be very grateful for any help. Thank you in advance.

Best,

Kairey

  1. The third attachment is broken.
  2. Even global optimizers are not guaranteed to find the global optimum. Starting values tend to matter.
  3. CMAES depends on random numbers, introducing one potential element of randomness. Dynare resets the random number generator seed upon startup. That may explain differences relative to looping over commands.
  4. Looping over commands instead of all of Dynare is dangerous when parameter dependence is not treated correctly, which seems to be the case in your mod-file.
  1. Sorry, my fault. I repost the third code: osr3.mod (2.5 KB)

  2. What does the global optimum mean? Does it just mean the minimum welfare loss? Since starting values matter, how can we ensure that we get the minimum welfare loss?

  3. Do you mean the values in the parameters block matter for the initialization? Here is my understanding: in osr3.mod, when I change the initial value of phipi in the parameters block (line 13), since I use the osr command (line 57) to parse the model before the loop, the random numbers have already changed. So even if I subsequently use the set_param_value (line 69) and set_dynare_seed (line 72) commands, it still doesn't help, and the results turn out different. Is that right? If so, similar to question 2, since the values in the parameters block affect the results, how can we ensure that we get the minimum welfare loss and the corresponding OSR parameters?

  4. When I want to do welfare analysis, since looping over Dynare (method 1) is not a real loop and looping over the Dynare command (method 2) is dangerous, looping over the MATLAB routine (method 3) is more desirable, right?

Looking forward to your reply. Thanks again.

Best,

Kairey

  1. Yes, it means the minimum welfare loss. You should use different starting values and check whether you consistently get the same result. Otherwise, pick the solution with the smallest loss.
  2. I am not sure I understand the question here. The problem is still to find the optimum. If you are having trouble with starting values and optimizers, then a grid search may be the best way, as your problem seems small-scale.
  3. I always prefer method 3, but it should be done right. To take parameter dependence into account,
rrho = (1/cbeta-1);  
rMU = cepsilon_f/(cepsilon_f-1);  
rmu = log(rMU);              
rTHETA = (1-calpha)/(1-calpha+calpha*cepsilon_f);      
rlambda = (((1-comega)*(1-cbeta*comega))/comega)*rTHETA; 
rupsilonyn = -((1-calpha)*rmu-log(1-calpha))/(cgamma*(1-calpha)+cnu+calpha);
rpsiyna = (1+cnu)/(cgamma*(1-calpha)+cnu+calpha);  
rkappa = rlambda*(cgamma+((cnu+calpha)/(1-calpha))); 
rLAMBDAa = 1/((1-cbeta*crhoa)*((cgamma*(1-crhoa)+phiy))+rkappa*(phipi-crhoa));

should be model-local variables or placed in a steady_state_model-block.
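For instance, the dependent definitions above could be moved inside the model block as model-local variables. A minimal sketch (only the definitions are shown; the actual model equations are omitted):

```
model;
    // Model-local variables (prefixed with #) are re-evaluated from the
    // current parameter values every time the model is solved, so a loop
    // that changes phipi or phiy picks up the new values automatically.
    #rrho = 1/cbeta - 1;
    #rMU = cepsilon_f/(cepsilon_f - 1);
    #rmu = log(rMU);
    #rTHETA = (1 - calpha)/(1 - calpha + calpha*cepsilon_f);
    #rlambda = (((1 - comega)*(1 - cbeta*comega))/comega)*rTHETA;
    #rupsilonyn = -((1 - calpha)*rmu - log(1 - calpha))/(cgamma*(1 - calpha) + cnu + calpha);
    #rpsiyna = (1 + cnu)/(cgamma*(1 - calpha) + cnu + calpha);
    #rkappa = rlambda*(cgamma + (cnu + calpha)/(1 - calpha));
    #rLAMBDAa = 1/((1 - cbeta*crhoa)*(cgamma*(1 - crhoa) + phiy) + rkappa*(phipi - crhoa));

    // ... model equations follow here ...
end;
```

The corresponding entries then have to be deleted from the parameters block, while phipi and phiy stay declared parameters so that osr can optimize over them.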

Dear professor Pfeifer,

Following your suggestion, I redefined the dependent parameters you mentioned as model-local variables (the # operator), deleted them from the parameters block, and then re-tried the five scenarios above. Now I find that the results of all five scenarios are the same:

when initial value of phipi is: 0
optimal param of phipi and phiy : -9.4651, -14.7961
welfare loss : 3.1784
when initial value of phipi is: 1
optimal param of phipi and phiy : 1.096, 0.10468
welfare loss : 2.5428

My questions are:

  1. So the reason I got different results across scenarios in my first post is just that I ignored parameter dependence, right?
  2. In fact, the three methods above are the same, right? That is, if I correctly handle the parameter dependence problem, they will produce the same results?
  3. Since starting values tend to affect the results, it is desirable to use a grid search to find the minimum welfare loss and the corresponding optimal parameters, right?

Here I post the corrected three codes:
osr1_local.mod (2.1 KB)
osr2_local.mod (2.5 KB)
osr3_local.mod (2.6 KB)

Looking forward to your reply. Thanks very much.

Sincerely,

Kairey

Hi,

I checked another model, which is nonlinear. The parameter (phi) I selected for the loop does not affect other parameters, so I think I don't need to consider the parameter dependence problem, but I found that the results of looping over Dynare and looping over the MATLAB routine are also different. I have no idea what is wrong.

Here I post my code: model_para.mod (11.7 KB), loop_for_Dynare.m (668 Bytes),
loop_for_matlab_routine.m (828 Bytes).

Just run the two MATLAB files, and you will get the two results:

  1. Results of loop for Dynare
    optimal param of variance.para(i) : 0.00067505 0.00088737 0.0011686 0.0018001
    optimal param of mean.para(i) : 2.2366 2.2856 2.3132 2.3357
  2. Results of loop for Matlab routine
    optimal param of variance.para(i) : 0.00067505 0.00090844 0.0011921 0.0018354
    optimal param of mean.para(i) : 2.2366 2.0995 2.1265 2.1556

I have no idea and am really confused. Could you please spare a little time to give me some explanations for my latest two posts? Thanks very much.

Best,

Kairey

  1. Yes, the parameter dependence is most probably the reason.
  2. Yes, if handled correctly, all three methods should return the same results (at least if the seeds are handled correctly).
  3. That depends. A grid search may only be feasible for low-dimensional problems.
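For a two-coefficient rule like the one here, such a grid search can be sketched in MATLAB as follows. This is only an illustration: welfare_loss is a hypothetical helper (not a Dynare function) that would have to solve the model for a given coefficient pair and return the loss; the grid bounds and step size are arbitrary.

```matlab
% Hypothetical grid search over the two rule coefficients.
% welfare_loss(phipi, phiy) is assumed to return the welfare loss for a
% given coefficient pair; you would have to implement it yourself.
phipi_grid = 0:0.1:3;
phiy_grid  = 0:0.1:3;
best_loss  = Inf;
for ii = 1:length(phipi_grid)
    for jj = 1:length(phiy_grid)
        loss = welfare_loss(phipi_grid(ii), phiy_grid(jj));
        if loss < best_loss           % keep the best pair found so far
            best_loss  = loss;
            best_phipi = phipi_grid(ii);
            best_phiy  = phiy_grid(jj);
        end
    end
end
fprintf('Grid minimum: phipi = %g, phiy = %g, loss = %g\n', ...
    best_phipi, best_phiy, best_loss);
```

The grid minimum can then serve as a starting value for osr to refine the solution.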

The part

       set_param_value('phi',para_rigid(i));
       save parameterfile phi;

is wrong. There is no variable phi defined (unless you ran Dynare earlier).

  1. What do you mean by low-dimensional problems?

I think I have defined phi by using the 'para_rigid = 0.6:0.1:0.9', 'set_param_value('phi',para_rigid(i))', and 'save parameterfile phi' commands in the MATLAB file, didn't I? To ensure that the corresponding value of phi is used in every loop iteration, I wrote the 'load parameterfile' and 'set_param_value('phi',phi)' commands in the Dynare file from the last post, model_para.mod (lines 59-60).

A more time-consuming way is to manually change the value of phi and repeatedly run Dynare. To check, I just wrote the loop command inside the Dynare file, so now I don't need the commands I mentioned before to define and call the value of phi in every loop.

For the first code, model_no_loop.mod (12.0 KB), just manually change the value of phi from 0.6 to 0.9 (line 42) and repeatedly run Dynare; then you get the results:

Results of repeatedly running Dynare when phi is : 0.6
optimal param of variance.para : 0.00067505
optimal param of mean.para : 2.2366

Results of repeatedly running Dynare when phi is : 0.7
optimal param of variance.para : 0.00088737
optimal param of mean.para : 2.2856

Results of repeatedly running Dynare when phi is : 0.8
optimal param of variance.para : 0.0011686
optimal param of mean.para : 2.3132

Results of repeatedly running Dynare when phi is : 0.9
optimal param of variance.para : 0.0018001
optimal param of mean.para : 2.3357

For the second code, model_loop.mod (12.3 KB), you get the results:
Results of loop for Matlab routine
optimal param of variance.para(i) : 0.00067505 0.00090844 0.0011921 0.0018354
optimal param of mean.para(i) : 2.2366 2.0995 2.1265 2.1556

You can see they are exactly the same as the results in my last post, so I think I have correctly defined phi for the loop. But what still confuses me is the different results from the two methods.

In this model, the parameter phi does not affect other parameters, so I think I don't need to consider the parameter dependence problem, but the results are still different. Could the reason be the initval block or the higher order (order=2)?

Thanks in advance.

Best,

Kairey

set_param_value will set the corresponding entry of M_.params, not create a MATLAB variable called phi.
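In other words, if a plain MATLAB variable is needed (e.g., for save), the value has to be read back from M_.params explicitly. A sketch of what the loop file could do instead (assuming Dynare has already been run once so that M_ exists in the workspace):

```matlab
set_param_value('phi', para_rigid(i));               % updates M_.params only
phi_pos = strmatch('phi', M_.param_names, 'exact');  % locate the parameter
phi = M_.params(phi_pos);                            % now phi exists as a variable
save parameterfile phi;                              % and can be saved/loaded
```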

  1. My guess is that it's about the seed. Dynare internally uses random numbers for the simulations and IRFs. What happens when you use theoretical moments?

Hi,

I had fixed the seed in model_loop.mod (line 357). Since Dynare internally uses random numbers, I didn't add the set_dynare_seed('default') command in model_no_loop.mod; when I do add that command, nothing changes.

To get theoretical moments, I set periods=0 (line 352) and made no other changes.

  1. Results of repeatedly running Dynare:
    Results of repeatedly running Dynare when phi is : 0.6
    optimal param of variance.para : 0.00061247
    optimal param of mean.para : 15.8176

    Results of repeatedly running Dynare when phi is : 0.7
    optimal param of variance.para : 0.00080193
    optimal param of mean.para : 16.395

    Results of repeatedly running Dynare when phi is : 0.8
    optimal param of variance.para : 0.0010257
    optimal param of mean.para : 16.9167

    Results of repeatedly running Dynare when phi is : 0.9
    optimal param of variance.para : 0.0014101
    optimal param of mean.para : 17.2463

  2. Results of loop for Matlab routine:
    optimal param of variance.para(i) : 0.00061247 0.00080087 0.0010237 0.0014004
    optimal param of mean.para(i) : 15.8176 44.7419 46.1858 47.1682

You can see the results are still different. I don't know what the real reason for the difference is.

There are two issues here:

  1. phi enters your steady state computations through the initial values. If you run a loop without updating initval, Dynare converges to a different steady state. Given the presence of multiple steady states, you should use an analytical steady state to select the correct/intended one.
  2. Your error handling is wrong. In Dynare 5, it should be
stoch_simul(order=2, irf=0, ar=0, nofunctions, hp_filter=1600);

para_rigid = 0.8:0.1:0.9;
for i=1:length(para_rigid)
    set_param_value('phi',para_rigid(i));
    set_dynare_seed('default');
    [info, oo_] = stoch_simul(M_, options_, oo_, var_list_);
    if info(1)
       fprintf('Here there is an error with this combination of parameters!\n');    
    else
       para_pos=strmatch('ye',M_.endo_names,'exact');
       variance.para(i)=oo_.var(para_pos,para_pos);
       mean.para(i)=oo_.mean(para_pos);      
    end
end
disp(['Results of loop for Matlab routine'])
disp(['optimal param of variance.para(i) : ' num2str(variance.para)]);
disp(['optimal param of mean.para(i) : ' num2str(mean.para)]);

That is, only save the output if the model could be solved.
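Regarding point 1, an analytical steady state would go into a steady_state_model block, which is re-evaluated from the current parameter values at every solution, so the loop cannot drift to a different steady state. The block below is purely illustrative: the variable names and expressions are placeholders, not the actual solution of this model.

```
steady_state_model;
    // Placeholder expressions: replace them with the closed-form steady
    // state of your own model. Because these are recomputed from the
    // current parameters, changing phi inside the loop is handled
    // automatically, and no numerical solver is involved.
    r = 1/cbeta - 1;
    n = 1/3;
    y = n^(1 - calpha);
    c = y;
end;
```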


Thank you very much for your kind and prompt reply.

Do you mean that a loop will not update the values of the dependent variables in the initval block? To avoid this problem, I changed the parameter phi in the initval block to a specific value like 0.4 (lines 316-319); now phi only affects some equations in the model block. Here are the results:

  1. Results of repeatedly running Dynare, model_no_loop2.mod (12.0 KB):
    Results of repeatedly running Dynare when phi is : 0.6
    optimal param of variance.para : 0.00067505
    optimal param of mean.para : 2.2366
    Results of repeatedly running Dynare when phi is : 0.7
    optimal param of variance.para : 0.00088737
    optimal param of mean.para : 2.2856
    Results of repeatedly running Dynare when phi is : 0.8
    optimal param of variance.para : 0.0011686
    optimal param of mean.para : 2.3132
    Results of repeatedly running Dynare when phi is : 0.9
    optimal param of variance.para : 0.0018001
    optimal param of mean.para : 2.3357

  2. Results of looping over the MATLAB routine, model_loop2.mod (12.3 KB):
    optimal param of variance.para(i) : 0.00067505 0.00090844 0.0011921 0.0018354
    optimal param of mean.para(i) : 2.2366 2.0995 2.1265 2.1556

You can see that the results of the two methods are still different, even though there seems to be no parameter dependence problem in either code. My questions are:

  1. If a model has multiple steady states, will the results of the different methods still differ even when there is no parameter dependence problem?

  2. To solve the problem, is the only way to derive the analytical steady state and use a steady_state_model block instead of the initval block?

  3. If it is hard to ensure the same results, which method should I choose? Is repeatedly running Dynare more desirable and reliable? Can I trust the results of looping over the MATLAB routine?

Looking forward to your reply. Thanks again.

Best,

Kairey

Numerical solvers for the steady state always depend on initial conditions. You could try to fix that initial condition to get the same results. But that would still not answer which of the two steady states is the economically correct one. This is not something that has a technical solution.
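One way to try that fix, building on the loop over the MATLAB routine from above, is to reset the steady-state guess before each call so the numerical solver always departs from the same point. This is a sketch under the assumption that stoch_simul uses oo_.steady_state as the solver's starting value; whether fixing the guess fully removes the dependence still has to be checked in your model.

```matlab
ys_start = oo_.steady_state;   % save the starting point once, after the first run
for i = 1:length(para_rigid)
    set_param_value('phi', para_rigid(i));
    oo_.steady_state = ys_start;          % same initial guess every iteration
    set_dynare_seed('default');
    [info, oo_] = stoch_simul(M_, options_, oo_, var_list_);
end
```

Even if this makes the loop reproducible, it only pins down one of the steady states; selecting the economically intended one still requires an analytical characterization.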