I ran into an issue where my code does not generate the expected result, and I wonder whether it comes from the way I solve for the steady state. My model includes many independent sectors. To solve the steady state of these groups of equations, I follow the approach of many papers and sacrifice degrees of freedom on exogenous parameters in exchange for pinning down critical values, e.g. a GDP normalization Y=1 or a sector ratio like Y1/Y2=1. My question is simple: if I take this approach that most other papers use, will it affect my calibration later? How do people deal with this problem when calibrating for comparative statics, if their simulation code can only be solved with normalized output? Do they play with different normalizations to pin down the parameterized values of the exogenous variables? I am not sure whether I have explained my questions clearly, but if anyone has experience with these issues, I would be happy to learn from you. Thanks!
If your model features constant returns to scale, then the scaling with TFP is arbitrary. It does not matter whether you normalize when you consider percentage deviations from steady state.
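As a sketch of why this is innocuous (a generic Cobb–Douglas example, not the model from this thread): under constant returns to scale, rescaling TFP rescales steady-state levels but leaves ratios and log deviations untouched, so a normalization like Y=1 only fixes units:

```latex
% Cobb-Douglas production, CRS in (K,L):
\[
  Y_t = A K_t^{\alpha} L_t^{1-\alpha}, \qquad
  \hat{y}_t \equiv \log Y_t - \log Y^{\ast}.
\]
% Rescaling TFP by a factor lambda (holding steady-state labor fixed)
% rescales the level of the steady state but not percentage deviations:
\[
  A \mapsto \lambda A
  \;\Rightarrow\;
  Y^{\ast} \mapsto \lambda^{\frac{1}{1-\alpha}}\, Y^{\ast},
  \qquad \hat{y}_t \text{ unchanged.}
\]
```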
Many thanks, Johannes. May I ask another question about the interpretation of impulse response functions? Basically, I set up an expression for the stock return, ret(t)=Price(t)/(Price(t-1)-Dividend(t-1)), and consider an exogenous shock occurring at t. I then plot the impulse response of ret to the exogenous shock x (ret can be obtained through the Euler equation). I am wondering whether I should interpret this return as a realized return (risk that materialized at t) or as an expected return. Since Dynare runs many simulation experiments starting from t-1 and averages the responses over a large number of simulations, by the law of large numbers it should be interpreted as an expected value (expected return)? I lean toward the second interpretation, but I would really like to hear your advice on this.
Sorry, but you need to more carefully explain what you are doing. Where does the large number of simulations come in?
BTW, professor, I have a second question regarding the estimation procedure. I notice that Dynare now has a new block for moment-matching problems like SMM. Do you know where I could find references for this part? I plan to match target moments once the code issue is solved, and I would like a template for the moment-matching part. Ideally, it would be even better if it covered a multi-sector model. Thanks!
There are various examples at tests/estimation/method_of_moments · master · Dynare / dynare · GitLab
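For orientation, a minimal SMM setup along the lines of those examples (the matched moments, parameter names, bounds, and data file below are placeholders, not from the actual model; the option names follow the Dynare manual):

```
// Sketch of an SMM estimation with Dynare's method_of_moments block.
// All variable/parameter names and values are illustrative placeholders.
matched_moments;
    c;          // E[c]
    c*c;        // E[c^2]
    c*c(-1);    // first-order autocovariance of c
    y*y;        // E[y^2]
end;

estimated_params;
    rho,     0.9,  0, 0.999;   // name, initial value, lower, upper bound
    sigma_e, 0.01, 0, 1;
end;

method_of_moments(mom_method = SMM
                 , datafile = 'mydata.mat'
                 , order = 2
                 , burnin = 500
                 , weighting_matrix = ['DIAGONAL','OPTIMAL']  // two-step estimation
                 );
```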
Professor, do you know where I could find references for the option settings of the template codes? The template codes are well structured, but neither the video nor the template gives details on the parameter settings. It would be very helpful if possible. Thanks!
Sorry, professor, I have one more question about the RBC template code you provided. In the RBC run, the final result contains no estimates of the standard errors of the estimators (NaNs), but as far as I know, SMM or GMM should include direct or indirect inference on this part. Is there any way I can adjust the options to show these parts? As the screenshot shows, the block only reports the conclusion on overidentification, not inference on the estimators.
- The options are documented in the manual.
- If you get a proper interior solution, the standard errors should be displayed.
Professor, could you please help me take a look at why my SMM code doesn't work? I think I followed the template exactly, but here are the error messages:
Computing data moments. Note that NaN values in the moments (due to leads and lags or missing data) are replaced by the mean of the corresponding moment
method_of_moments: The steady state at the initial parameters cannot be computed.
Error using print_info (line 32)
Impossible to find the steady state (the sum of square residuals of the static equations is 74277.6739). Either the model doesn’t have a steady
state, there are an infinity of steady states, or the guess values are too far from the solution
I have checked the data twice, and the calibration code works perfectly on its own when it is not run through the SMM code. Considering that the SMM tool is a bit of a black box, I am wondering whether this issue is caused by the wide parameter bounds, so that the optimizer moves into a region where even my calibration code fails. Attached are my optimization code and the mat file with the real data.
ClimateDSGE_SMM.mod (12.3 KB)
ClimateMacroData.mat (7.3 KB)
You did not provide all files to run the model.
Climate_Habit_common.txt (8.7 KB)
Sorry, Professor. For some reason, the system cannot upload the inc file, so I converted it to a text file. I think you will need to change it back to .inc to run it.
It seems you need to use a proper steady state file so that the steady state can be computed for different parameter draws.
Thanks, Professor. I have another question regarding this solution. Comparing your template code with mine, I find that in most cases you use a steady_state_model block instead of an initval block. Does it matter here for solving the SMM? If I am not mistaken, I remember that the steady_state_model block uses internal optimization to solve the system of equations. Usually that loses accuracy and makes it hard to find exact zeros of the equations. Does it work better in this SMM/GMM code, so that I should follow your template exactly?
The steady_state_model block is for providing the exact steady state to Dynare for a given set of parameters. The block is executed whenever the model is solved; no optimization takes place. It's your job to provide this solution. AFAIK, your current code does that already, but only for the initial set of parameters.
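For illustration, a minimal steady_state_model block for a textbook RBC model (a generic sketch with standard Cobb–Douglas notation, not this thread's model; R and K_L are auxiliary variables used only inside the block):

```
// Closed-form steady state, re-evaluated at every parameter draw.
steady_state_model;
    R   = 1/beta;                                    // gross real interest rate
    K_L = (alpha/(R - 1 + delta))^(1/(1-alpha));     // capital-labor ratio
    w   = (1-alpha)*K_L^alpha;                       // real wage
    L   = 1/3;                                       // labor normalization (example)
    K   = K_L*L;
    Y   = K^alpha*L^(1-alpha);
    I   = delta*K;
    C   = Y - I;
end;
```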
Thanks for your help, Professor. I fixed that issue, but other issues have come up. The errors report that the numerical solution cannot be found. I guess the state variables jump into undefined regions during estimation. I have restricted the parameter bounds to be close to my feasible calibration, but the estimation still jumps into undefined regions during the iterations. I have more or less given up on this block. My feeling is that it is a way to find local solutions and is not general enough for all kinds of models. In particular, in my case the numerical steady state is only sensible within a small region, and maybe the estimation block is too inflexible to be confined to that region. This is my conjecture; please let me know if you have any suggestions.
Please provide the most recent version of the code.
Why do you still have hardcoded parameter dependencies like
alpha1 = delta^eta;
alpha2 = delta - delta^(1-eta)/(1-eta)*alpha1;
gamma  = find_gamma(kappa, eis, alpha, nu, rhocm, mu, phi1, beta, delta, Ns, ass, sigma, pi1, pi2, xi, lambda, Lss, Lss2, rhoa, gammah, gammal, rhoL, varphi, omega, xsi);
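One way to make such dependencies update with every parameter draw is to compute them inside the steady_state_model block rather than in the parameter-initialization section, which is only executed once. A sketch using the names from the snippet above (find_gamma is the user's own helper function):

```
steady_state_model;
    // Dependent parameters, recomputed at each draw of the estimated parameters:
    alpha1 = delta^eta;
    alpha2 = delta - delta^(1-eta)/(1-eta)*alpha1;
    gamma  = find_gamma(kappa, eis, alpha, nu, rhocm, mu, phi1, beta, delta,
                        Ns, ass, sigma, pi1, pi2, xi, lambda, Lss, Lss2,
                        rhoa, gammah, gammal, rhoL, varphi, omega, xsi);
    // ... followed by the closed-form steady-state values of the endogenous variables
end;
```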
I see, professor. So I don't need them once I have the steady_state_model block. BTW, professor, I just switched between different optimizers and found that the optimizer is the most efficient global solution method; at least, it is very good at searching for local solutions. Maybe it would be helpful to show a comparison of these different algorithms in some handbook.