Large-scale model - 2nd order approximation


I have a fairly large model that I currently solve with AIM using a 1st-order approximation. On my machine, it takes about 3 seconds to get IRFs.
The model has about 3,000 equations and roughly 100 state variables.
Before coding up the model in Dynare, I'd like to know: is there any chance that Dynare can produce a 2nd-order approximation to this model without the computer running out of memory? I'd like to do a welfare analysis, and for that a 2nd-order approximation would come in handy.
I have already solved for the steady state by hand, so that should not be an issue.

Thanks for sharing your experience and thoughts on this.


It should work. If it doesn't, contact me.



Thanks, Michel.

I have coded up a partial-equilibrium version of my model to get a sense of how long a second-order approximation takes, and I compare it to a version of my AIM code. The two codes do not feature exactly the same equations: in the Dynare code, I have removed some static variables by combining equations (surprisingly, this does not always seem to improve performance; I am not sure why). The Dynare code has the following variables:

Number of variables: 150
Number of stochastic shocks: 30
Number of state variables: 60
Number of jumpers: 30
Number of static variables: 60

Here are my computation-time results (they are from a slow machine; my faster machine is typically 4x faster):

AIM: 2.5s
Dynare (1st order): 11s
Dynare (2nd order, replic=1): 19m51s

I fear that scaling the model up further would make it too slow if, e.g., I want to estimate 3-4 parameters. Or is there any hope?


That depends on what you are actually trying to do. Much of what Dynare does initially is overhead (computing analytical derivatives). Once that is done, evaluating them is often fast, so the comparison may be misleading: estimation may incur the overhead just once.
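To illustrate (a hedged sketch, not official guidance): the derivative overhead is paid when `stoch_simul` first builds the decision rules; afterwards the rules sit in `oo_.dr` and can be re-evaluated cheaply. Timing the solution step on its own would look roughly like this:

```matlab
% Hedged sketch: isolate the one-off cost (analytical derivatives plus
% decision rules) from the per-use cost.  Without 'periods', stoch_simul
% computes only the solution and theoretical moments; 'nomoments' skips
% the latter, so this call is essentially the overhead.
tic;
stoch_simul(order=2, pruning, nograph, nomoments, nocorr, nofunctions);
toc;
% The decision rules are now stored in oo_.dr (ghx, ghu, ghxx, ...) and
% can be reused, e.g. during estimation, without re-deriving anything.
```

Comparing this time to a full run with simulations gives a rough split between overhead and evaluation cost.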

Thanks for your response.

Ok, so I tried a model that's somewhat closer to what I eventually want to do. On my fast machine, it takes 1h47m for a 2nd-order approximation and 50 stochastic simulations.
So my best guess is that the 'final' model will probably take about 4h. Is there a simple way to speed things up? Does it help to substitute out static variables?

I’d like to do 2 things:
Estimate 3-4 parameters.
Get welfare results.

To estimate the model, I would like to recover the TFP shock innovations that match a time series of output. That is, I want to ask the model to solve for the TFP innovations such that the model perfectly reproduces the observed output series. Given this particular draw of innovations, I'd like to estimate a few parameters, e.g. the persistence of the TFP process. How would I do that?
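One way this could be done at first order (a hedged sketch, not a built-in Dynare routine; `y_data`, `iy`, `iu`, and `istate` are placeholders): with one observable (output) and one innovation, the first-order decision rule can be inverted period by period to back out the shocks.

```matlab
% Hedged sketch of a first-order "inversion filter".  Placeholders:
% y_data (observed output, deviations from steady state), iy (row of
% output in the decision-rule ordering), iu (column of the TFP
% innovation), istate (rows of the state variables).  Field names may
% differ across Dynare versions.
ghx  = oo_.dr.ghx;             % response to lagged states
ghu  = oo_.dr.ghu;             % response to innovations
T    = length(y_data);
u    = zeros(T,1);
xlag = zeros(size(ghx,2),1);   % start at the steady state
for t = 1:T
    % solve y_data(t) = ghx(iy,:)*xlag + ghu(iy,iu)*u(t) for u(t)
    u(t) = (y_data(t) - ghx(iy,:)*xlag) / ghu(iy,iu);
    % update the state vector with the recovered innovation
    e    = zeros(size(ghu,2),1);  e(iu) = u(t);
    ynew = ghx*xlag + ghu*e;
    xlag = ynew(istate);
end
% u now holds the recovered innovations; parameters such as the TFP
% persistence could then be chosen so that u behaves like white noise
% with the assumed variance -- a rough moment-matching idea, not a
% Dynare command.
```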

So I guess this type of estimation should be a matter of seconds once the analytical derivatives have been computed?

Thank you,

One additional remark: the code that takes 1h47m for 50 stochastic simulations runs for close to 22h with 1,000 simulations. I guess I don't quite understand why evaluating the model takes so long. It seems that solving the model takes 45min, and 50 simulations take 1h…

How are you conducting the stochastic simulations? Which commands are you invoking?

I have to admit that I am new to Dynare, so maybe this is not correct. Here's the command:

stoch_simul(order=2, hp_filter=1600, irf_shocks=(sh_H1), replic=1000, pruning, nograph, nocorr, nodecomposition, nofunctions, nomoments);

sh_H1 is the shock to country H1. I have several countries in the model, but for now they are all symmetric, so I only look at the IRF for country #1.

If your model is big, this will not be a good way of running replications. Please describe as precisely as possible what you want to do: e.g., take a parameter vector, get the model solution, run one stochastic simulation of length x periods, store this run to disk, etc.
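For instance (a hedged sketch, to be adapted rather than copied): instead of letting `replic` re-simulate inside `stoch_simul`, solve once and then call Dynare's internal simulation routine in a loop, saving each run. The argument list of `simult_` differs across Dynare versions, so check yours first.

```matlab
% Hedged sketch: solve once (stoch_simul has already run), then simulate
% many times and store each run to disk.  simult_'s signature varies
% across Dynare versions -- verify against your installation.
T = 100; nrep = 1000;
chol_Sigma = chol(M_.Sigma_e);               % assumes Sigma_e has full rank
for r = 1:nrep
    ex_ = randn(T, M_.exo_nbr) * chol_Sigma; % draw the innovations
    y   = simult_(M_, options_, oo_.dr.ys, oo_.dr, ex_, options_.order);
    save(sprintf('sim_%04d.mat', r), 'y');   % store this run to disk
end
```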

Ok, so let’s forget about estimation. Let me just focus on evaluating welfare.

I would like to see how changing one parameter affects welfare in the economy. There might be different ways of doing this, but I thought of the following approach:

  1. Get the model solution for a generic set of parameters.
  2. Plug in a parameter vector.
  3. Simulate the economy for 100 periods and record the average welfare over these 100 periods.
  4. Run as many simulations as needed to get accurate welfare results. Calculate average welfare across all simulations.
  5. Re-do steps 3-4 using a new parameter vector.
  6. Compare the two welfare figures.

I would never go for simulation in this case; work with the theoretical objects instead. See e.g.
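For concreteness, the theoretical route could look like this (a hedged sketch; `U`, `bet`, and the welfare variable `W` are placeholders for whatever the actual model uses): define welfare recursively inside the model and read its second-order-accurate unconditional mean from the theoretical moments, with no simulation at all.

```matlab
% Hedged sketch: welfare from theoretical moments instead of simulation.
% U is period utility and bet the discount factor -- placeholder names.
var W;
model;
  // ... existing equations ...
  W = U + bet*W(+1);           // recursive lifetime welfare
end;
stoch_simul(order=2, nograph);  // no 'periods': theoretical moments only
% Second-order-accurate mean welfare (field layout may differ by version):
% iW = strmatch('W', M_.endo_names, 'exact');
% mean_W = oo_.mean(iW);
```

Comparing `mean_W` across two parameter vectors then replaces steps 3-6 of the simulation plan, and avoids the Monte Carlo noise entirely.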