Attachments: Aggregatestry.mod (6.6 KB), changeoneparameternew.m (1.5 KB)
To whoever is interested in this topic,
I ran into a problem when modifying the values of a parameter in a loop. You can check the values of "results_mean" generated by the .m file; the code works fine. But when I change the parameter value manually, the means in "oo_.mean" are not exactly the same as their counterparts in "results_mean". You may say the difference is tiny, since the numbers only disagree from the fifth or sixth decimal place onward. Yet if I modify two parameter values simultaneously over a much larger range of values, the difference becomes significant. I simply want to know why this happens and how to avoid it.
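For context, a typical way to loop over a parameter value around Dynare looks like the sketch below. This is only an assumed reconstruction of the kind of loop changeoneparameternew.m might contain, not the poster's actual code; the model name `Aggregatestry`, the parameter name `sigma_e`, and the grid of values are placeholders. The pattern with `set_param_value` and the four-argument `stoch_simul` call follows the Dynare manual's recommended loop idiom.

```matlab
% Hypothetical sketch of a parameter loop around Dynare (assumed, not the
% poster's actual changeoneparameternew.m). Requires Dynare on the MATLAB path.
dynare Aggregatestry noclearall;          % first run creates M_, options_, oo_

param_grid = 0.001:0.001:0.020;           % placeholder grid of parameter values
results_mean = NaN(M_.endo_nbr, numel(param_grid));

for j = 1:numel(param_grid)
    set_param_value('sigma_e', param_grid(j));   % 'sigma_e' is a placeholder name
    [info, oo_] = stoch_simul(M_, options_, oo_, var_list_);
    if info(1) == 0
        results_mean(:, j) = oo_.mean;           % store the means for this value
    end
end
```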
You are using simulated moments. When you call Dynare, it resets the seed of the random number generator. Thus, the differences come from different random noise draws.
If you are seeing large differences, your simulation length is not sufficient.
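If the goal is to make manual runs and looped runs draw the same shocks, one option (a sketch, assuming Dynare 4.x, where `set_dynare_seed` is a built-in command) is to pin the seed explicitly in the .mod file before the simulation; the particular seed value and the `periods` length below are arbitrary placeholders:

```matlab
% In the .mod file, before stoch_simul: fix the RNG seed so every run
% draws the same shock sequence. Only relevant when moments are simulated,
% i.e. when a periods option is used.
set_dynare_seed(20240101);   % any fixed integer; the value itself is arbitrary

stoch_simul(order=2, irf=60, periods=10000, noprint, nograph);
```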
Thank you for your prompt response, as always! But I still have a doubt: the command line in the .mod file says
stoch_simul(order = 2,irf=60,noprint,nograph);
Dynare should compute the theoretical mean, right? Why does the simulation length matter?
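For what it's worth, the distinction shows up in the `stoch_simul` options: without a `periods` option Dynare reports theoretical (unconditional) moments, while with `periods > 0` it simulates a path and reports empirical moments, which is where the random draws matter. A minimal sketch (the `periods` value is an arbitrary placeholder):

```matlab
% Theoretical moments: no periods option, oo_.mean holds the theoretical mean
stoch_simul(order=2, irf=60, noprint, nograph);

% Simulated moments: with periods > 0, oo_.mean is computed from one
% simulated path, so it depends on the shock draws and hence on the seed
stoch_simul(order=2, irf=60, periods=50000, noprint, nograph);
```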
You are right. Can you provide me with an example where you get big differences?
OK. Let me give you a relatively simple set of files as an example, attached: Aggregatestry.mod (6.6 KB) and changeoneparameternew.m (1.5 KB).
Please first run the Dynare file to get the theoretical means. Then run the .m file and look at the generated matrix "results_mean". The Dynare file uses the parameter value 0.004, which should correspond to the 15th column of the matrix. You can find a big difference there.
Thank you for helping me!
I don’t see any difference when I do that.
I ran it again and now it looks fine to me as well. I don't know what happened the first time. Sorry for bothering you!