In my macro classes, calibration typically meant specifying some parameters (\alpha, \beta, etc.) and then solving for steady-state values and ratios like k/y and l/y. This is probably because the US economy was the use case, and \alpha, \beta, \delta, etc. are well established in the literature for the US, typically estimated from micro data.
Recently, I found on this forum that calibration can also run in the other direction: one specifies ratios like k/y and l/y and backs out values for \alpha, \beta, \delta, etc., since calibration is just a mapping between parameters and steady-state values. This approach works for less developed economies where one cannot find parameter values in the literature but can compute long-run averages, provided data on k, y, l, etc. are available.
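To make the "mapping" concrete, here is a minimal sketch of the second approach in a standard RBC setting (Cobb-Douglas production, steady-state Euler equation and capital accumulation). The target ratios below are hypothetical placeholders, not data from any particular country:

```python
# Backing out parameters from long-run average ratios in a standard RBC model.
# Assumed (hypothetical) quarterly targets:
K_Y = 10.0    # capital-output ratio
I_Y = 0.25    # investment-output ratio
alpha = 0.33  # capital income share (from national accounts, if available)

# Steady-state capital accumulation: I = delta * K  =>  delta = (I/Y) / (K/Y)
delta = I_Y / K_Y

# Steady-state Euler equation: 1 = beta * (alpha * Y/K + 1 - delta)
beta = 1.0 / (alpha / K_Y + 1.0 - delta)

print(f"delta = {delta:.4f}")  # 0.0250
print(f"beta  = {beta:.4f}")   # 0.9921
```

Given long-run averages of K/Y and I/Y (and a factor-share estimate for \alpha), \delta and \beta fall out of the steady-state conditions rather than being taken from the literature.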
1. Are there any other ways to calibrate a DSGE model in a data-poor environment where micro data are not available?
2. Also, I guess no parameter is universally the same for all economies, right? That is, all parameter values are country-specific?
3. Also, calibrating a model using either of the two methods above does not necessarily imply that the theoretical moments will match the moments in the data, right?
4. In that case, you have to use moment-matching techniques, for example IRF matching. But then you cannot use those same moments to evaluate the model. So how do you evaluate the model if you use a moment-matching technique?
For parameters like the degree of risk aversion and the Frisch elasticity of labor supply, we can, for example, start with values widely used in the literature for developed economies and then adjust them to match some second moments for less developed economies, right? I know there is no guarantee of a good match for all second moments of interest, but would matching just the great ratios (C/Y, K/Y, I/Y) suffice?
Is this the stage where people loop over sets of parameters to find a good match?
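A toy version of such a loop, under assumptions I am inventing for illustration: instead of simulating a full DSGE model at each grid point, I match the closed-form moments of an AR(1) driving process z_t = \rho z_{t-1} + \sigma \epsilon_t to two hypothetical data targets (a standard deviation and a first-order autocorrelation):

```python
import numpy as np

# Toy moment-matching loop over a parameter grid. Real applications would
# solve and simulate the full model at each grid point; here the AR(1)
# moments are available in closed form, which keeps the loop cheap.
target_std = 0.02       # hypothetical data moment
target_autocorr = 0.9   # hypothetical data moment

best = None
for rho in np.linspace(0.5, 0.99, 50):
    for sigma in np.linspace(0.001, 0.05, 50):
        model_std = sigma / np.sqrt(1.0 - rho**2)  # stationary AR(1) std
        model_autocorr = rho                        # first-order autocorrelation
        loss = (model_std - target_std)**2 + (model_autocorr - target_autocorr)**2
        if best is None or loss < best[0]:
            best = (loss, rho, sigma)

loss, rho_hat, sigma_hat = best
print(f"rho = {rho_hat:.3f}, sigma = {sigma_hat:.4f}")
```

In practice the loss function would weight the moments (e.g. by their variances, as in GMM/SMM), and a proper optimizer would replace the brute-force grid, but the logic is the same.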
Many thanks for the clarifications! Maybe I am pushing this a little, but let me ask one last related question.
I have read a number of papers trying to explain the consumption volatility puzzle (higher volatility of consumption relative to output). This paper, for example, concludes that "If the informal sector is poorly measured, consumption is more volatile than output."
But we can also make consumption more volatile than output just by calibrating some parameters to match desired second moments of interest.
So in such "causal" DSGE models, it is about comparing a novel model to a benchmark model to illustrate the effect the author wants to show, right?
Like in the example above, I guess it is the increase in volatility in the novel model that matters. Whether the increase was from 0.85 (in the benchmark model) to 1.5 (in the novel model), or from 1.6 to 2.3, does not matter much, right? Because that depends on how we calibrate the benchmark model. Is my intuition right?
Also, I am doing some experiments from Uribe and Schmitt-Grohé's book. It seems one can match some second moments by including a scale parameter n in the shock processes, for example z_t = \rho z_{t-1} + n \epsilon_t, and then tuning it until you get the desired results. Maybe it is used in cases where the rest of the parameters can't deliver the desired second moments, something like a "lender of last resort."
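The role of that scale parameter can be checked directly. A small sketch (with illustrative values of \rho and \sigma_\epsilon, not taken from any paper): the stationary standard deviation of z is n \sigma_\epsilon / \sqrt{1 - \rho^2}, so n scales the volatility of the driving process linearly without touching its autocorrelations:

```python
import numpy as np

# Effect of the scale parameter n in z_t = rho * z_{t-1} + n * eps_t.
# The stationary std of z is n * sigma_eps / sqrt(1 - rho^2): doubling n
# doubles the volatility of the shock process (and, in a linearized model,
# of every endogenous variable) while leaving all autocorrelations unchanged.
rho, sigma_eps = 0.95, 0.007  # illustrative values

def z_std(n):
    return n * sigma_eps / np.sqrt(1.0 - rho**2)

print(z_std(1.0), z_std(2.0))  # the second is exactly twice the first
```

This is why n is a blunt instrument: it rescales all second moments of the shock proportionally, so it can fix the overall volatility level but not relative volatilities or correlations.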
The case you are describing is one where you first have to get the facts straight before you can go to modeling. If consumption is mismeasured, there is no point in writing down a model that provides a structural explanation of the mismeasured series.
Thanks a lot for the reply, Prof. Pfeifer.
Does "getting the facts straight" here mean making the necessary "novel" assumptions that underlie the model?
To illustrate that these "facts" (in the model) are the cause of a particular business cycle phenomenon, one should keep the same calibration in both the benchmark and the novel model, right?
My point was that maybe calibration does not matter much if you want to say that some "facts" or a set of assumptions lead to a particular result or phenomenon. In other words, the results should hold regardless of calibration, right?
No, what I am saying is: in the example you cited, people do not agree whether consumption is actually more volatile than output in emerging economies. It may be a statistical artifact resulting from measurement error. If you don't know what you are trying to explain, modeling will fail.