I’m exploring making some parameters of a model random. For example, in a simple RBC model, the depreciation rate is a parameter delta. I then create a new model where delta is a variable, with delta = delta1 + delta2*z, where z(t) = rho*z(t-1) + e(t), and delta1, delta2, and rho are parameters. The new model solves with no problems. I would expect that when delta2 = 0 and delta1 equals the delta of the first model, the two models should give qualitatively similar simulations. However, econometric tests reject that simulations from the two models are equivalent. I’m wondering why this is the case.
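For concreteness, the modification described above could look something like the following .mod fragment (a hedged sketch only; the variable and shock names, and the rest of the RBC equations, are assumptions, not the actual attached files):

```matlab
// Hypothetical sketch of Model2: delta promoted from parameter to variable
var c k y z delta;           // delta is now an endogenous variable
varexo eps_a e;              // e is the new shock driving delta
parameters delta1 delta2 rho;

model;
  delta = delta1 + delta2*z; // time-varying depreciation rate
  z = rho*z(-1) + e;         // AR(1) process for the driver z
  // ... remaining RBC equations, unchanged from Model1 ...
end;
```

With delta2 = 0, the delta equation collapses to delta = delta1, so the two models share the same steady state and decision rules for the original variables.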
The reason I’m doing this is that I am interested in formulating tests of correct specification of a model, so I need a model that slowly drifts away from the true model. I had hoped to achieve that by letting delta2 slowly move away from zero. However, even when delta2 = 0, the two models are distinguishable by the test.
So, any comments or advice would be appreciated. I can provide .mod files if that would be helpful.
Yes, please send me the files.
The files are attached. The first model is the same model you helped me with a while ago, when you made my code Matlab-compatible. This is a simple version, without the steady state file. The second is the same, except the depreciation rate is delta = delta1 + delta2*z3 instead of a constant parameter. You’ll see that the steady states are the same, as expected. When I use the simulations, the test I’m working on rejects that the two sets of data come from the same model, even when delta1 equals the delta of Model1 and delta2 = 0. When I compare two data sets simulated from Model1.mod, the test does not reject that the two sets are from the same model (except for the occasional expected Type-I error).
Model2.mod (1.62 KB)
Model1.mod (1.25 KB)
Your model has high persistence and you are only using 180 periods, which might be too short. My guess is that the problem is with different random numbers. While each run of Dynare uses the same seed, having an additional shock (even if it is unused due to zero variance) results in a different draw of random numbers. You should try using the simult_ command directly. That way you can make sure to keep the random numbers for the first two shocks the same when adding a drift in delta.
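A hedged sketch of what calling simult_ directly could look like (Dynare 4.x conventions; the shock count, the column index of the extra shock, and the seed value are assumptions to be adapted to the actual .mod files):

```matlab
% After running dynare Model2 noclearall, the structures M_, oo_,
% and options_ are in the workspace.
set_dynare_seed(1);                    % fix the seed so draws are reproducible
ex_ = randn(180, M_.exo_nbr);          % one common draw for all shocks
ex_(:, 3) = 0;                         % assumed: column 3 is the new delta shock;
                                       % check M_.exo_names for the true ordering
y0 = oo_.dr.ys;                        % start from the steady state
y_ = simult_(y0, oo_.dr, ex_, options_.order);  % manual simulation
```

Feeding Model1 the first columns of the same ex_ matrix should then reproduce identical paths for the shared variables, isolating any remaining difference from the random-number issue.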
What puzzles me is that two data sets from Model1 are the same. How do you generate them? Completely different seeds?
Finally, you are using a third order approximation without pruning. This is not recommended.
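If pruning is the issue, switching it on is a one-line change in the .mod file (the periods and irf settings here are illustrative assumptions):

```matlab
stoch_simul(order=3, pruning, periods=180, irf=0);
```

Note that pruning at third order requires Dynare 4.4 or later.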
Thanks for looking at this. I simulate 180 periods, but the first 100 are dropped and only the last 80 are used, to mimic 20 years of quarterly data, a scenario that seems realistic for applications. I don’t want to use long simulations, because real data doesn’t fit that case. I am doing Monte Carlo work, so I want all samples to be different. To achieve this, I set the seed before each simulation.
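The Monte Carlo setup described above could be sketched roughly as follows (a hypothetical outline only; the replication count, the call pattern, and the test step are assumptions, not the actual code):

```matlab
% Hypothetical Monte Carlo loop: new seed per replication,
% burn-in of 100 periods, 80 retained observations.
for rep = 1:1000
    set_dynare_seed(rep);                 % different but reproducible draw each time
    stoch_simul(var_list_);               % simulate 180 periods (set in the .mod file)
    data = oo_.endo_simul(:, 101:180);    % drop burn-in, keep last 80 periods
    % ... apply the specification test to data ...
end
```

With this scheme, two samples from Model1 differ only through the seed, which is why the test should accept equivalence up to Type-I error.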
When I draw two samples from Model1, the samples are not the same, because the seed is different. However, the test does not reject the true hypothesis that the two samples come from the same model (which is true), except occasionally as is expected.
When I test whether samples drawn from Model2, with delta2 = 0 and delta1 = 0.025, are equivalent to samples drawn from Model1, the test rejects. However, Model2 with this restriction is equivalent to Model1. I’m wondering if the fact that delta is a parameter in one case and a variable in the other causes the simulations to differ, because for Model2 delta is perturbed around its true value 0.025, whereas for Model1 delta is a parameter that is not perturbed.
I’ll look into using pruning, perhaps that is the problem.
delta being a variable cannot be the reason. You can see in the simulations and the decision rules that it stays fixed at 0.025. Moreover, I tried manual simulations for both models with the same random numbers by calling simult_ directly, once for Model1 and once for Model2. The simulations for the variables are the same in this case.
OK, that confirms that, for you, things are working as I had expected. I haven’t yet compared the simulations directly. I will examine my code again and will do simulations controlling the shocks, to verify that the same paths for the variables come out. A note: I’m using Dynare 4.4.1 and Octave 3.8.0.