Estimation speed too slow

Dear all,

I’m estimating a multi-sectoral model with 15 sectors in Dynare. To do so, I simulate 102 periods of the model with a first program (Simulation_15.mod, attached) and then estimate it using sectoral output and sectoral intermediate inputs as observables. The model is thus quite large (almost 900 equations), and although everything worked well with 3 sectors, with 15 sectors I cannot even reach the MCMC stage in 6 hours, even though my computer is quite powerful.
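Roughly, the end of Simulation_15.mod does the following (a sketch with placeholder variable names, not the exact code):

    stoch_simul(order=1, periods=102, irf=0);

    // after stoch_simul with periods>0, Dynare leaves each endogenous variable
    // in the MATLAB workspace as its simulated series; collect the observables
    // and save them so the estimation program can load them as a datafile
    // (Y_1 and M_1 are placeholder names)
    Y_1_obs = Y_1;
    M_1_obs = M_1;
    save('OBS.mat', 'Y_1_obs', 'M_1_obs');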

Is there a way to make it faster? Could it be optimized with a different kind of estimation routine? Or should I find a more powerful computer to run it?
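For instance, would options along these lines help? This is only a sketch of standard Dynare estimation options with illustrative values, not something I have tuned:

    // mode_compute=5 (newrat) is sometimes faster than the default csminwel;
    // fast_kalman_filter switches to the Chandrasekhar recursions, which can
    // help when the state vector is large; mh_replic=0 skips the MCMC so the
    // mode-finding step can be timed on its own
    estimation(datafile=OBS, mode_compute=5, fast_kalman_filter, mh_replic=0);

I have also read that declaring model(use_dll); can speed up the evaluation of a large system, but I have not tried it.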

You can find the mod files and the necessary Excel spreadsheet attached.

Thank you very much for your help!

Best regards,

Côme
Estimation_15.mod (7.6 KB)
Estimation_15_steadystate.m (4.5 KB)
Gamma.csv (1.1 KB)
Param.csv (305 Bytes)
Simulation_15.mod (7.8 KB)
Simulation_15_steadystate.m (4.0 KB)

I cannot run your files because Leontieffbis.m is missing.

Sure, sorry Professor, here is the file. As for the script after the stoch_simul command in the simulation program, you can comment it out; it’s not needed for this issue!
Leontieffbis.m (202 Bytes)

There are still files missing, like Conssimple.m.

I am sending you all the files, but I think that if you comment out the script after stoch_simul, you won’t need them!

Thanks a lot Professor

Conssimple.m (287 Bytes)
Input_bis.m (310 Bytes)
Input_one.m (453 Bytes)
Input_tot.m (434 Bytes)
Inputter.m (206 Bytes)

Please provide a zip-file with all the necessary files. For example, OBS.mat is missing.

Professor,
Here is the archive with all the codes. Most of them are not needed at this point, but everything is in this file. The script to run is Estimation_15.mod.

You can also run Simulation_15.mod if you want to simulate another set of observables for the estimation.

Thank you very much, and sorry if it’s a bit messy right now.

Multi_Pfeifer.zip (637.9 KB)

Please adjust your codes to work in Dynare 5.0. The steady state file is
Estimation_15_steadystate.m (4.5 KB)
I also used

       SIGMA_@{sectors[i]}, , inv_gamma_PDF, Param(@{i},3), 0.1;

to remove the invalid recursive definition. Now I get an error about stochastic singularity.
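One quick check (a sketch relying on Dynare’s global structures, to be run in the MATLAB console after the error) is to compare the number of observed variables with the number of shocks, since the Kalman filter needs at least as many shocks plus measurement errors as observables:

    // stochastic singularity check: n_observables must not exceed
    // n_shocks + n_measurement_errors
    fprintf('observables: %d, shocks: %d\n', length(options_.varobs), M_.exo_nbr);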

I will adjust the code for sure. Do you need it to run the code yourself?

As for the stochastic singularity, I get it too now, but I didn’t before. Are my shocks’ standard errors badly scaled?

I would like to run your codes myself to see what takes so long. The stochastic singularity is the thing you need to fix. You should try to understand where it’s coming from.
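If the observable and shock counts do match, the usual remaining culprit is that some observables are exact linear combinations of others. If neither dropping an observable nor adding a shock is an option, a standard workaround is to allow for measurement error. A sketch, assuming Y_1 is one of the declared varobs (the name is a placeholder):

    estimated_params;
      // estimating the standard error of an observed variable adds
      // measurement error to that observable, which can break the singularity
      stderr Y_1, inv_gamma_pdf, 0.01, inf;
    end;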

Fast estimation has two major goals: 1) Never have a story, feature, epic, or project that’s unestimated; 2) Maximize the speed of estimation, while preserving the quality of estimation.

Faster estimation means your teams are more likely to estimate everything immediately upon creation. Having everything estimated leads to much stronger release predictability metrics. The only real question is: Does your estimation quality suffer when you start estimating faster?

Before there were story points, many teams simply counted every story as 1 point. Some were bigger and some smaller, but teams felt that it would all even out. And despite some variance due to story size, teams could still predict approximately how many stories they could get done for each sprint.

I’m sorry, but I have no clue why you’re telling me this! :slight_smile: