Rerun OSR to check for optimality or local extremum

Dear all,

I am a bit confused by the documentation of the osr command.
My intention is to check how different OSRs behave in the Branch and McGough (2009) model.

I would like to iterate over one parameter \alpha (=share of rational agents) and compute the OSR each time and store the OSR parameters.

Before storing the OSR parameters, I would like to check whether the parameters are sensible (= the true minimizers). So essentially I would like to rerun for different starting values, or something like that (in order to rule out that the optimization algorithm is stuck in a strange local extremum).

How can I do this?
My mod file without this rerunning process is attached. osr_parameters_HE.mod (2.7 KB)
The obtained results look strange to me, because the obtained OSR parameters are so huge.

I appreciate any suggestions.

Best regards
Max

  1. You are not running a proper loop. Please have a look at

and use set_param_value to update your parameter a.
  2. It is not uncommon that the optimal parameters are extremely large. See

  3. Within your loop, you can add another loop that draws random starting values.
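For concreteness, here is a minimal sketch of such a loop. The grid for alpha and the result fields oo_.osr.objective_function and oo_.osr.optim_params are my assumptions about the standard Dynare output structure, not taken from the attached mod file:

```matlab
% Sketch: rerun OSR over a grid of alpha and store the optimized rule parameters.
% Assumes the mod file has been run once, so M_, oo_, options_ exist and
% M_.osr.* was populated by an initial osr-command.
alpha_grid = 0.1:0.1:0.9;                       % illustrative grid for alpha
n_grid     = numel(alpha_grid);
losses     = NaN(n_grid, 1);
results    = cell(n_grid, 1);
for ii = 1:n_grid
    set_param_value('alpha', alpha_grid(ii));   % proper way to update a parameter
    oo_.osr = osr(M_.endo_names, M_.osr.param_names, ...
                  M_.osr.variable_indices, M_.osr.variable_weights);
    if ~oo_.osr.error_indicator                 % keep results only if OSR succeeded
        losses(ii)  = oo_.osr.objective_function;  % assumed field holding the loss
        results{ii} = oo_.osr.optim_params;        % assumed struct of optimal params
    end
end
```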

@jpfeifer
Prof. Pfeifer, thank you very much for your excellent support.

I have a couple of questions.

  1. What does it mean to run an improper loop? What classifies my for loop as improper? Are there any dangers in using an improper loop the way I did in the code above?
    In dynare.pdf I only found a brief discussion of loops (MATLAB/Octave loops versus macro processor loops, pp. 136-137), which does not cover my questions.

  2. Is there a reason why you use: options_.osr.opt_algo=9; and not the default Sims algo (opt_algo=4) ?

  3. What is this line doing?
    oo_.osr = osr(var_list_,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights);
    I think var_list_ has to be replaced by M_.endo_names.

  4. What is the variable oo_.osr.error_indicator telling me? Can I extract information about indeterminacy and instability from this variable? If yes, how?

  5. Most important: Is your code under 1. robust when the Blanchard-Kahn (1980) conditions are not satisfied? Will it still try to compute optimal parameters and variances if the BK conditions are violated for a specific value on the grid of the for loop? Or will it cancel all computations once BK is violated for the first time?

I think after playing around with Dynare, my questions (2., 4., 6.) are already answered.
Hopefully this mod file is more appropriate than my previous one osr_Branch_McGough.mod (4.4 KB)
W.r.t. question 3., I observed that results are bad under options_.osr.opt_algo=9; in line 66 of the mod file.

I have to add a serious question: Why do my results change when I change the initial calibration of the Taylor rule to be optimized? For example, changing rho_TR = 1; to rho_TR = 0; in line 22 of my mod file yields different values for the optimal parameters. Are these values used as starting points for the optimization algorithm?

Thank you very much for this advice.

Unfortunately, I do not know how to change/define the starting values handed over to the osr command.
Could you support me with a piece of code?
Which optimization routine would you recommend for OSRs?
options_.osr.opt_algo=?;

Thank you in advance.

Dear all (and happy new year, by the way),

could someone with sound experience of the osr command answer my last questions please?
For me the dynare.pdf is not sufficient to answer the questions 7., 8. and the question from the last post.

I appreciate any hints.

  1. Your loop is improper because i) you are looping over the Dynare syntax of the command, which is extremely error-prone (you should use the functional command syntax that I outline above), and ii) you should be using the set_param_value command to update parameters (assigning a value to the parameter name directly in the loop, as you did above, does not ensure correct updating).
  2. opt_algo=4 is a Newton-based optimizer and therefore local by construction. opt_algo=9 in contrast is a global optimizer that should work better regardless of the starting values for the parameters
  3. The line oo_.osr = osr(var_list_,M_.osr.param_names,M_.osr.variable_indices,M_.osr.variable_weights); is the functional syntax for calling the osr-command. It is Matlab code that can be put in an arbitrary Matlab loop. The var_list_ variable stores the names of the variables usually put after the osr-command. If it does not yet exist, use M_.endo_names to display results for all endogenous variables.
  4. The oo_.osr.error_indicator contains the error code from the OSR optimization. It will be equal to 1 whenever a problem was encountered (like never finding a vector that satisfies the BK conditions)
  5. Dynare will stop computations if the model cannot be solved at the initial OSR parameter values - unless the noprint-option is set. In that case, osr will continue, but given that no explicit error message will be shown, you will need to check the error code to see whether there was a problem at a given parameter value on the grid.
  6. The osr-optimizer always takes the last calibrated parameter value stored in M_.params as its starting value. That explains why you experience different results. You can use the set_param_value-command in your loop to try different starting values.
  7. I tend to prefer global optimizers like 8 or 9. They usually take longer, but are more robust to the starting values. When you say that with 9 your results are poor, do you mean the OSR parameter values are large? That is common and indicates you need to set bounds.
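Points 4., 5., and 6. can be combined into a loop like the following sketch. The rule-parameter names rho_TR, phi_pi, phi_y are illustrative stand-ins for whatever the mod file actually optimizes, and oo_.osr.objective_function / oo_.osr.optim_params are my assumptions about where the loss and the optimal values are stored:

```matlab
options_.noprint = 1;           % per point 5: continue instead of stopping on errors
start_names = {'rho_TR', 'phi_pi', 'phi_y'};   % illustrative rule parameters
best_loss   = Inf;
for rep = 1:20
    % Per point 6: osr starts from the values currently in M_.params,
    % so resetting them changes the optimizer's starting point.
    for jj = 1:numel(start_names)
        set_param_value(start_names{jj}, 2*rand);  % illustrative random draw
    end
    oo_.osr = osr(M_.endo_names, M_.osr.param_names, ...
                  M_.osr.variable_indices, M_.osr.variable_weights);
    if oo_.osr.error_indicator  % per point 4: 1 signals BK violation or other failure
        continue
    end
    if oo_.osr.objective_function < best_loss
        best_loss   = oo_.osr.objective_function;
        best_params = oo_.osr.optim_params;        % assumed struct of optimal values
    end
end
```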

Thank you very much. I highly appreciate your answer.

Thanks for all the useful hints.

I wrote a simple piece of code using your hints.

My selection of alternative starting values is as follows:

  1. Use baseline parameters from the literature.
  2. Call osr and store the candidate parameters and the loss as initial guess.
  3. Run a loop:
  4. Use candidate parameters + \varepsilon where \varepsilon \sim \textrm{N}(0,\sigma^2=25) as new starting values for the optimizer.
  5. If new loss < candidate loss : overwrite candidate parameters and loss, then return to 4.
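The steps above can be sketched as follows; the parameter names and the fields oo_.osr.objective_function / oo_.osr.optim_params are illustrative assumptions rather than exact names from my mod file:

```matlab
param_names = {'rho_TR', 'phi_pi', 'phi_y'};   % illustrative rule parameters
% Step 2: an initial osr call from the baseline calibration gives the first candidate.
oo_.osr   = osr(M_.endo_names, M_.osr.param_names, ...
                M_.osr.variable_indices, M_.osr.variable_weights);
cand_loss = oo_.osr.objective_function;        % assumed field holding the loss
cand_vals = cell2mat(struct2cell(oo_.osr.optim_params));  % assumed struct of optima
% Steps 3-5: perturb the candidate and keep any improvement.
for rep = 1:Num_of_reps
    start = cand_vals + 5*randn(size(cand_vals));   % epsilon ~ N(0, sigma^2 = 25)
    for jj = 1:numel(param_names)
        set_param_value(param_names{jj}, start(jj));
    end
    oo_.osr = osr(M_.endo_names, M_.osr.param_names, ...
                  M_.osr.variable_indices, M_.osr.variable_weights);
    if ~oo_.osr.error_indicator && oo_.osr.objective_function < cand_loss
        cand_loss = oo_.osr.objective_function;
        cand_vals = cell2mat(struct2cell(oo_.osr.optim_params));
    end
end
```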

Any critique on this approach is welcome.
One may draw the alternative starting values directly from a continuous uniform distribution.
Or use a grid of starting values, which becomes time-consuming.

I have a question w.r.t options_.osr.opt_algo=4;

If I run this code osr_algo_4.mod (4.4 KB)
multiple times, I obtain the same loss value but always different optimal parameter values.
I set a seed in line 66, but the optimal parameters still differ after rerunning the code.
E.g. calling dynare twice gives:

Start param      : 0.95308, 1.3824, 1.1585
Start loss       : 0.77763
End algo_4 param : 21.8645, 0.16156, 2.4589
End algo_4 loss  : 0.74877

Start param      : 0.95308, 1.3824, 1.1585
Start loss       : 0.77763
End algo_4 param : 16.1344, 0.15705, 1.7979
End algo_4 loss  : 0.74877

I expected to obtain the same result, because I execute the same code.
Any idea what is wrong?

A horse race osr_algo_4_vs_9.mod (6.3 KB)
between osr.opt_algo = 4 and 9 indicates even greater differences in parameters for the same loss.

Start param      : 0.95308, 1.3824, 1.1585
Start loss       : 0.77763
End algo_4 param : 4.7424, 0.14814, 0.48359
End algo_4 loss  : 0.74877
End algo_9 param : 53.6718, 0.1865, 6.1285
End algo_9 loss  : 0.74877

Best regards,
Max

  1. In Matlab I cannot reproduce the first issue. Here, the seed works as expected and results are always the same. However, the downloaded file required different quotation marks in the seed command.
  2. The loss is actually not the same. If you look at the further digits, you will see slight differences. My hunch is that the first large parameter essentially already fully stabilizes the economy and all other values hardly do anything. That is, the objective function around the optimum is very flat.

Thank you for the hints.

I have another observation which indicates inconsistent behavior in Octave.

If I use the first code osr_algo_4.mod and set Num_of_reps = 10; I obtain the loss
Loss_algo_4 = 0.748774647775558

If I set Num_of_reps = 25; I obtain
Loss_algo_4 = 0.748774647858677

The latter loss is larger!
Since the first 10 noise elements are the same for both runs, the algorithm should have stored the first result after iteration 10 in both cases!

Why is the code coming up with a worse result after the additional 15 iterations?

That is implausible!

Can you reproduce the issue in Matlab?

No, I cannot reproduce the issue in Matlab. Which Octave version and OS are you using?

Dynare 4.5.1, Octave 4.2.1, and Windows 10.

How large are the losses computed by Matlab?

Are there huge discrepancies to the numbers I posted?

Matlab’s number after 15 repetitions is about 0.74877464777496, i.e. in the same ballpark.