Strategies for Solving Large-Scale Models

Hello all,

I’m currently solving a large model and would really appreciate advice on the best way to handle it. The model has around 60 variables, 2500 parameters, and fairly complicated expressions. I would like to compute a second-order, or ideally even a third-order, approximation of the model.
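
For concreteness, the approximation order is requested on the stoch_simul command. A minimal placeholder model (purely illustrative, not the actual model) would look roughly like this:

var y;
varexo e;
parameters rho;
rho = 0.9;

model;
    y = rho*y(-1) + e;
end;

shocks;
    var e; stderr 0.01;
end;

stoch_simul(order=2);   // or order=3 for a third-order approximation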

The main obstacle seems to be the way Dynare computes the derivatives of the equilibrium conditions. When I attempt to compute a second order approximation, I get the error message

[quote] Error using feval
The current workspace already has too many variables; there is no room for "T854468".

Error in stochastic_solvers (line 107)
[junk,jacobia_,hessian1] = feval([M_.fname '_dynamic'],z(iyr0),...

Error in resol (line 141)
[dr,info] = stochastic_solvers(dr,check_flag,M,options,oo);

Error in stoch_simul (line 82)
[oo_.dr,info,M_,options_,oo_] = resol(0,M_,options_,oo_);

Error in aggregateDynamics (line 8010)
info = stoch_simul(var_list_);

Error in dynare (line 223)
evalin('base',fname) ; [/quote]

The error comes after preprocessing is completed and the steady state is successfully computed using an external _steadystate.m file. So I think what is going on is that the preprocessor creates a very large number of temporary terms as part of the differentiation routine, and when Matlab attempts to evaluate the _dynamic.m file it runs out of room in the workspace.

I can compute a first-order approximation of the model in about 30 seconds (of which about 16 are for the steady state). I can also compute a second-order approximation of a smaller version of the model in about 50 seconds, including the steady state. So I think that if I can just get the derivatives to evaluate successfully in Matlab, a second-order approximation of the full model must be feasible.

If anyone has other ideas about how best to proceed, I would really appreciate it! I’m hoping that someone else has dealt with a large-scale model like this and can put me on the right path, so that I won’t have to waste time trying a bunch of hopeless things. Also, any feedback on what I have already tried (listed below) would be useful.

(1) I can successfully compute a second-order expansion of the model with the use_dll option, which compiles the model files instead of using the Matlab _dynamic.m version (see the sketch after this list). However, it takes 2 hours, so it is obviously not ideal. I understand that this option can increase run time, especially for large models, but the fact that it takes so much longer than the second-order approximation of the smaller version of the model (again, 50 seconds) may indicate that I’m doing something incorrectly.

(2) I tried using the “notmpterms” option when invoking Dynare (also shown in the sketch below), but the code hasn’t finished after more than four hours. Given that use_dll finishes in half that time, this does not seem competitive.

(3) I could potentially port everything over to Dynare++, but I would prefer not to do this, because Dynare++ syntax is much more restrictive than Dynare’s and would require a lot of workarounds. Is it even clear that Dynare++ would be advantageous in this case, i.e., have a run time significantly shorter than two hours?
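
To make (1) and (2) concrete, here is a sketch using the same placeholder model as above (model name, equation, and option placement are illustrative only, not the actual setup):

// (1) compile the model files: declare the model block with use_dll
//     (variable declarations and shocks block as in the placeholder above)
model(use_dll);
    y = rho*y(-1) + e;
end;

stoch_simul(order=2);

// (2) alternatively, keep the plain M-files but skip temporary terms by
//     calling Dynare with the command-line option, e.g.:
//         dynare mymodel notmpterms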

Could you please provide me with the files so I can look into this? Email is fine.

Hi,

I had a similar question, so I am reactivating this thread. I have a large model with 100–1000 variables, created by a macro loop. I am doing perfect foresight simulations and Ramsey policy.

I can scale the model up and down with the variable N. The code has the following structure, so N increases both the number of variables and the length of the expressions.


@#for j in 1:N
    g@{j} =
    @#for i in 1:N
        + g@{i}*...
    @#endfor
    ;
@#endfor
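
For context, the variables themselves are presumably declared in a similar loop, which is how N also drives the number of variables (sketch only; the actual declarations may differ):

@#define N = 25
var
@#for j in 1:N
    g@{j}
@#endfor
;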

It solves the dynamics in under 2 minutes for 120 variables (N=25) and in just over 2 minutes for 130 variables (N=27), but it no longer finishes in reasonable time (i.e., within hours) for 140 variables (N=29). In particular, it gets stuck, or progresses incredibly slowly, in the first iteration of solving for the dynamics. It seems that the evaluation of dynamic_g1.m and dynamic_g1_tt.m suddenly becomes much more time-consuming as the number of variables, and with it the length of the expressions, increases. (The steady state still works.)

I used Matlab 2023a and Dynare 5.4, and I noticed that the problem is even worse with Dynare 6. Although Matlab uses a lot of memory, the Windows Task Manager shows that free memory remains available at all times. check and model_diagnostics are fine. Switching to a PC with more RAM didn’t help, but switching to a Mac with more RAM did (though the latter is not a workable long-term solution for me).

Would you have any idea what could be behind this computational “cliff-edge effect”? Is there a maximum character length for .m files on Windows, or something like that?

Thanks a lot! Dominik

Can you provide me with the files so I can investigate the issue?

We confirmed with MathWorks that the speed loss is due to unexpected behavior in their JIT compiler. One can use undocumented features to resolve the issue, but the easiest workaround for now is to use the bytecode option.
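
Roughly, that means declaring the model block with the bytecode option, e.g. (placeholder equation only, as a sketch):

model(bytecode);
    y = rho*y(-1) + e;
end;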