Dear team,
I’m running a perfect foresight simulation in a large-scale model with many shocks. Since the solver converges only with fairly close initial values, the workflow is very path-dependent.
My issue relates to this section of the perfect_foresight_solver function:
% At the first iteration, use the initial guess given by
% perfect_foresight_setup or the user (but only if new_share==shareorig, otherwise it
% does not make much sense). Afterwards, until a converging iteration has been obtained,
% use the rescaled terminal condition (or, if there is no lead, the base
% scenario / initial steady state).
if completed_share == 0
    if iteration == 1 && new_share == shareorig
        % Nothing to do, at this point endo_simul(:, simperiods) == endoorig(:, simperiods)
    elseif M_.maximum_lead > 0
        endo_simul(:, simperiods) = repmat(endo_simul(:, lastperiods(1)), 1, options_.periods);
    else
        endo_simul(:, simperiods) = endobase(:, simperiods);
    end
end
There may be use cases where endo_simul shouldn’t be updated even if new_share ~= shareorig.
For example, suppose you run a large shock simulation and the homotopy method fails at a share of 0.82. If you then modify the model slightly or change some of the shocks, it would be great to be able to continue where you left off, using the previous endo_simul as the initial values together with the option homotopy_initial_step_size=0.82. Currently, the function overwrites the endo_simul provided by the user in this case.
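To make the intended restart workflow concrete, here is a rough sketch of what I have in mind (the save/restore step and the assumption that the partially converged path is available in oo_.endo_simul are mine, not current documented behavior):

```matlab
% First run: homotopy fails at share 0.82; save the last path obtained.
saved_path = oo_.endo_simul;          % assumes the partial path is kept here
save('restart_path.mat', 'saved_path');

% After tweaking the model or shocks: reload the path and restart the
% homotopy at the share where the previous run stopped.
load('restart_path.mat', 'saved_path');
oo_.endo_simul = saved_path;          % currently overwritten by the snippet above

% In the .mod file:
%   perfect_foresight_solver(homotopy_initial_step_size = 0.82);
```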
Another use case is to scale a selection of shocks after a failed or completed perfect foresight simulation. Here, the joint use of the options homotopy_initial_step_size and homotopy_exclude_varexo with a previously generated endo_simul would allow an efficient workflow, since the exact initial values of the first homotopy iteration could be calculated.
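For illustration, the second use case would look roughly like this (shock names eps_a and eps_b are placeholders; whether the user-supplied endo_simul survives as the initial guess is exactly the question above):

```matlab
% Reuse the path from the previous run as the initial guess...
oo_.endo_simul = saved_path;

% ...then rescale only a subset of the shocks, keeping the others fixed.
% In the .mod file:
%   perfect_foresight_solver(homotopy_initial_step_size = 0.82,
%                            homotopy_exclude_varexo = (eps_a, eps_b));
```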
My current workaround is simply to remove the new_share == shareorig condition above, but this may have unintended consequences that I’m not aware of.
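Concretely, the one-line change I made is the following (shown only to be explicit about the workaround; I have not checked its interaction with the other homotopy code paths):

```matlab
% Current condition in perfect_foresight_solver:
if iteration == 1 && new_share == shareorig

% My workaround: keep a user-supplied endo_simul at the first iteration
% regardless of the initial share (possible unintended consequences?):
if iteration == 1
```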
Any advice would be appreciated. Thank you!
Best,
Christian