3rd order approximations for large-scale models but "the dynamic derivatives matrix is too large"

Hi all!

I am currently running stochastic simulations for a large-scale model with 37 sectors, and I need a 3rd order approximation to capture the effects of uncertainty shocks.

The model features:

  Number of variables:         2520
  Number of stochastic shocks: 814
  Number of state variables:   814
  Number of jumpers:           1
  Number of static variables:  1705

However, when running the 3rd order approximation, the following error message shows up and informs me to reduce the order of approximation:

Starting Dynare (version 5.1).
Calling Dynare with arguments: none
Starting preprocessing of the model file ...
Substitution of exo lags: added 740 auxiliary variables and equations.
Found 2520 equation(s).
Evaluating expressions...done
Computing static model derivatives (order 1).
ERROR: The dynamic derivatives matrix is too large. Please decrease the approximation order.

It instructs me to decrease the approximation order, which does not work for me, as 3rd order is required to generate GIRFs. I did experiment with order=2, which runs (albeit taking a long time). Is there a way to work around this for order=3? Is it generally doable (e.g. using “use_dll”)?

Thanks in advance!

No, that will not work. At order=3, you will encounter matrices with 2520^3 columns, which is simply too big.
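To see the scale involved, here is a back-of-the-envelope sketch (my own arithmetic, assuming dense double-precision storage; not Dynare's exact internal layout):

```python
# Back-of-the-envelope: size of an order-3 dynamic derivatives matrix
# for a model with 2520 variables, stored densely in double precision.
n = 2520
cols = n ** 3                 # columns of the third-order derivative matrix
bytes_per_row = cols * 8      # 8 bytes per double

print(f"{cols:,} columns")                        # 16,003,008,000 columns
print(f"~{bytes_per_row / 1e9:,.0f} GB per row")  # ~128 GB per row
```

Even a single dense row of that matrix would need on the order of 128 GB, so the full object is far beyond any workstation's memory.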

Dear Johannes,

Thanks for the reply. I have managed to reduce the system to roughly 500 equations, with:

  Number of variables:         500
  Number of stochastic shocks: 150

Does a 3rd order approximation sound doable to you for this “reduced system”? Would it work in Dynare++ or in MATLAB in general with other derivative tricks?

On a side note, I also ran order=3 on a model with 8 sectors, which features:

  Number of variables:         175
  Number of stochastic shocks: 60
  Number of state variables:   60
  Number of jumpers:           1
  Number of static variables:  114

and the total computing time was about 2 hours.

I haven’t done the math, but the answer will depend on your computer’s memory. You can only try. The issue is that the matrix sizes scale with an exponent equal to the approximation order. The relevant matrices you need to compute are the objects g_{xxx} and g_{xuu}. You can have a look at the description at

The relevant number of variables is n_z = M_.nspred + M_.exo_nbr, and the number of matrix entries is n_z(n_z+1)(n_z+2)/6. This results from storing only the unique entries. I doubt you can be much more efficient.
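As a quick illustration of that formula (my own arithmetic; for the reduced system I assume the number of predetermined states roughly matches the 150 shocks, which need not hold for your model):

```python
def unique_third_order_entries(nspred, exo_nbr):
    """Unique entries per row of the third-order derivative matrix:
    n_z*(n_z+1)*(n_z+2)/6 with n_z = M_.nspred + M_.exo_nbr."""
    n_z = nspred + exo_nbr
    return n_z * (n_z + 1) * (n_z + 2) // 6

# Original 37-sector model: 814 state variables, 814 shocks
cols_full = unique_third_order_entries(814, 814)

# Reduced system: 150 shocks, nspred assumed ~150 (hypothetical)
cols_reduced = unique_third_order_entries(150, 150)

# Rough dense-storage memory, 8 bytes per double, one row per equation
gb_full = 2520 * cols_full * 8 / 1e9
gb_reduced = 500 * cols_reduced * 8 / 1e9
print(f"full model:    {cols_full:,} unique columns, ~{gb_full:,.0f} GB")
print(f"reduced model: {cols_reduced:,} unique columns, ~{gb_reduced:,.1f} GB")
```

Under these assumptions the full model needs on the order of 14,500 GB, while the reduced system comes to roughly 18 GB, which is why the answer depends on how much RAM you have.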