Simulated annealing block

Hi,
Thanks for having further improved the suite of optimization codes.
As for simulated annealing, I wonder whether developers could do the following

  1. Allow overriding the upper and lower bounds that are currently set automatically. For example, what I’ve done in my version of Dynare in dynare_minimize_objective.m, around line 122, is
[LB, UB] = set_bounds_to_finite_values(bounds, options_.huge_number);
if ~isempty(sa_options.UB)
    UB = min(sa_options.UB, UB);
end
if ~isempty(sa_options.LB)
    LB = max(sa_options.LB, LB);
end

where I’ve included new options in options_.saopt, e.g.

options_.saopt.UB=5*ones(25,1);
options_.saopt.LB=zeros(25,1);
options_.saopt.LB(9:10)=1.01;
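To illustrate the effect of the clamp above, here is a minimal standalone sketch (not Dynare code; the 1e7 automatic bound is just an assumed placeholder for options_.huge_number): user-supplied bounds only tighten the automatic ones, never widen them.

```matlab
% Automatic finite bounds, as set_bounds_to_finite_values might return them
UB = 1e7*ones(3,1);
LB = -1e7*ones(3,1);
% User-supplied overrides in the style of options_.saopt.UB/LB
userUB = [5; 5; 5];
userLB = [0; 1.01; 0];
% Clamp: keep the tighter of the two bounds element by element
UB = min(userUB, UB);   % -> [5; 5; 5]
LB = max(userLB, LB);   % -> [0; 1.01; 0]
```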

  2. Follow simulated_annealing automatically with a gradient-based method. In my experience this is very useful for combining the MCMC-like search of simulated_annealing with the precision of the gradient-based method (which can also help in generating positive-definite Hessians). E.g.

[opt_par_values, fval, exitflag, n_accepted_draws, n_total_draws, n_out_of_bounds_draws, t, vm] = ...
    simulated_annealing(objective_function, start_par_value, sa_options, LB, UB, varargin{:});
%%% added by Giovanni Lombardo 28 November 2017
% Set default options.
H0 = 1e-4*eye(n_params);
crit = options_.csminwel.tolerance.f;
nit = options_.csminwel.maxiter;
numgrad = options_.gradient_method;
epsilon = options_.gradient_epsilon;
Verbose = options_.csminwel.verbosity;
Save_files = options_.csminwel.Save_files;
% Change some options.
if ~isempty(options_.optim_opt)
    options_list = read_key_value_string(options_.optim_opt);
    for i=1:rows(options_list)
        switch options_list{i,1}
            case 'MaxIter'
                nit = options_list{i,2};
            case 'InitialInverseHessian'
                H0 = eval(options_list{i,2});
            case 'TolFun'
                crit = options_list{i,2};
            case 'NumgradAlgorithm'
                numgrad = options_list{i,2};
            case 'NumgradEpsilon'
                epsilon = options_list{i,2};
            case 'verbosity'
                Verbose = options_list{i,2};
            case 'SaveFiles'
                Save_files = options_list{i,2};
            otherwise
                warning(['csminwel: Unknown option (' options_list{i,1} ')!'])
        end
    end
end
if options_.silent_optimizer
    Save_files = 0;
    Verbose = 0;
end
% Set flag for analytical gradient.
if options_.analytic_derivation
    analytic_grad = 1;
else
    analytic_grad = [];
end
% Call csminwel.
[fval, opt_par_values, grad, inverse_hessian_mat, itct, fcount, exitflag] = ...
    csminwel1(objective_function, opt_par_values, H0, analytic_grad, crit, nit, numgrad, epsilon, Verbose, Save_files, varargin{:});
hessian_mat = inv(inverse_hessian_mat);
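To make the point about positive-definite Hessians concrete, one could add a check like the following after the csminwel1 call (a sketch, not part of the code above; it relies on chol’s second output flagging non-positive-definite input):

```matlab
% Verify that the Hessian is positive definite before using it,
% e.g. as the basis for the MCMC proposal covariance.
% chol returns p == 0 only if the matrix is positive definite.
[~, p] = chol(hessian_mat);
if p ~= 0
    warning('Hessian from csminwel1 is not positive definite.')
end
```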

Best

Gianni

dynare_minimize_objective.m (25.5 KB)

Hi Gianni,
the first part is actually redundant. You can specify prior truncation for mode-finding in the estimated_params block. It will do exactly the same thing, i.e. restrict the bounds for mode-finding, but will not be in effect during the MCMC.
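For reference, a hypothetical estimated_params block with such truncation might look as follows (parameter names, bounds, and priors are purely illustrative):

```
estimated_params;
// No explicit bounds: only the prior restricts rho
rho, beta_pdf, 0.5, 0.1;
// Initial value 0.6 with bounds [0.01, 3], used only during mode-finding
sigma_e, 0.6, 0.01, 3, inv_gamma_pdf, 0.5, inf;
end;
```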
Regarding running mode-finders in sequence, I opened a ticket at https://github.com/DynareTeam/dynare/issues/1573

Thanks! I didn’t realize that they were not active during MCMC.
Cheers
Gianni

Quote from the manual on LOWER_BOUND

Specifies a lower bound for the parameter value in maximum likelihood estimation.
In a Bayesian estimation context, sets a lower bound only effective while maximizing
the posterior kernel. This lower bound does not modify the shape of the prior
density, and is only aimed at helping the optimizer in identifying the posterior
mode (no consequences for the MCMC).