I have a normalized utility and production function in my model. I have derived the first-order conditions (CPOs) under that normalization and calibrated the model from them at the steady state. My question is how to code the normalized functions in Dynare.
Should I declare all the normalization-point values as parameters? But shouldn't the normalization point move through the years? In my case, the normalization point I've chosen is the sample geometric average, as advised by Klump et al. (2011). For example, the geometric average for capital would be 11 billion euros over 1995-2017. But when I run the model in Dynare, should this value take into account the years after 2017, or should the normalization point stay at the value based on the 1995-2017 data?
Or should I just enter the model in its non-normalized form? That would be odd, though, since my substitution and distribution parameters are computed from the normalized form, not the non-normalized one.
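For concreteness, the kind of normalized CES I have in mind is the Klump et al. (2011) type (technical-progress terms omitted for brevity), where $\bar{VA}$, $\bar{K}$ and $\bar{N}$ are the values at the chosen normalization point, here the 1995-2017 geometric means:

$$ VA_t = \bar{VA}\left[\omega_k\left(\frac{K_t}{\bar{K}}\right)^{\frac{\sigma_{va}-1}{\sigma_{va}}} + (1-\omega_k)\left(\frac{N_t}{\bar{N}}\right)^{\frac{\sigma_{va}-1}{\sigma_{va}}}\right]^{\frac{\sigma_{va}}{\sigma_{va}-1}} $$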
What is your problem with the coding? Because in the end, you need to set parameters to particular values. I don’t think there is anything special in terms of coding.
It is the same for the first-order conditions, right? Meaning, we can code them in the non-normalized form (since they stem from my normalized CES functions, we have normalized first-order conditions).
If so, we would then use the non-normalized form in the model block and the parameter values computed from the normalized form.
Yes, that will work as long as you correctly keep in mind that \alpha_k in the non-normalized model will be a function of \omega_k, VA, K, and \sigma_{va}.
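To make that concrete, here is a minimal .mod fragment (not a complete file; the names sigma_va, omega_k, VA_bar, K_bar, N_bar, alpha_k, C_va and all the numbers are placeholders) that computes the non-normalized parameters once from the normalized calibration:

```
parameters sigma_va omega_k VA_bar K_bar N_bar alpha_k C_va;

sigma_va = 0.6;    // substitution elasticity from the normalized calibration
omega_k  = 0.35;   // normalized distribution parameter
VA_bar   = 1;      // value added at the normalization point
K_bar    = 11;     // capital at the normalization point (e.g. 1995-2017 geometric mean)
N_bar    = 0.33;   // labour at the normalization point

// Implied parameters of the equivalent non-normalized CES
// VA = C_va*( alpha_k*K^((sigma_va-1)/sigma_va) + (1-alpha_k)*N^((sigma_va-1)/sigma_va) )^(sigma_va/(sigma_va-1))
alpha_k = omega_k*(VA_bar/K_bar)^((sigma_va-1)/sigma_va) / ( omega_k*(VA_bar/K_bar)^((sigma_va-1)/sigma_va) + (1-omega_k)*(VA_bar/N_bar)^((sigma_va-1)/sigma_va) );
C_va    = ( omega_k*(VA_bar/K_bar)^((sigma_va-1)/sigma_va) + (1-omega_k)*(VA_bar/N_bar)^((sigma_va-1)/sigma_va) )^(sigma_va/(sigma_va-1));
```

The model block then uses the non-normalized form with alpha_k and C_va, and the normalization point only enters through these pre-computed values.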
Hello, I have some related questions, and maybe a concept check with any of you on the re-parameterization method of Cantore et al. (2012). Do the values of the re-parameterized parameters change over time when I solve the model under perfect foresight? That is, should I code it so that the alpha(s) of the CES are re-parameterized in every simulation period? I feel like that's the case…
I understand that we normalize/re-parameterize the alpha(s) of the CES production function to get rid of dimensionality problems. However, if I want to study the transition from one steady state to another, say in response to a simple technology shock, wouldn't a time-varying re-parameterization mute the transition dynamics?
No, usually the normalization is done once for the steady state or mean. Essentially, you fix parameters based on some information you have and then keep them fixed.
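In Dynare terms, the normalized parameters live in the parameters block and are therefore constant over the whole perfect foresight path; only the exogenous driving process moves. A sketch of the simulation part, assuming an illustrative technology shock eps_z:

```
// alpha_k, C_va and sigma_va are computed once from the normalization point
// (see the fragment above) and stay fixed in every simulation period.
shocks;
var eps_z;          // illustrative technology shock
periods 1;
values 0.01;
end;

perfect_foresight_setup(periods=200);
perfect_foresight_solver;
```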
Pay attention to the substitution elasticity parameter! If your model throws errors (in particular, if the steady state computation gives negative values), this parameter could be the source, because the model tends to be very sensitive to it. This is the reason why many researchers avoid it and stick to a Cobb-Douglas function!
Hi all. I am back to this thread with a new question.
I wonder if it makes sense to assign a perfect foresight path to the parameter governing the share of an input factor in the CES production function, say \alpha_k.
I want to study how an increase in the importance of k impacts the economy over time. I understand that the perfect foresight analysis will give me trajectories of economic variables, say y, over time as k gradually becomes more important in the production process. Can I do this given all the normalization issues?
I would tend to say that this should be fine. You start with a properly normalized CES and then vary the parameter. As long as that is the research question you are interested in, it should work.
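One way to implement this in Dynare (a sketch, not the only way; the numbers and the step-wise path are placeholders): instead of declaring alpha_k as a parameter, declare it as an exogenous variable and give it a deterministic path, with the endval block pinning down the terminal steady state the solver converges to:

```
varexo alpha_k;      // treated as an exogenous forcing variable instead of a parameter

// ... model block uses alpha_k exactly as before ...

initval;
alpha_k = 0.35;      // value from the original normalization
end;

endval;
alpha_k = 0.40;      // new long-run importance of capital
end;

shocks;              // step-wise transition in the first periods,
var alpha_k;         // overriding the endval value for those periods
periods 1:4, 5:8;
values 0.36, 0.38;
end;

perfect_foresight_setup(periods=200);
perfect_foresight_solver;
```

In practice the initval and endval blocks would also contain the remaining variables (or be followed by steady) so that Dynare has the initial and terminal steady states.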