Calibration for less developed economies

In my macro classes, calibration typically meant specifying some parameters (\alpha, \beta, etc.) and then solving for steady-state values and ratios like k/y, l/y, etc. This was probably because the US economy was the use case and \alpha, \beta, \delta, etc. are well documented in the literature for the US, typically estimated from micro data.

Recently, I found on this forum that calibration can also go the other way: one specifies ratios like k/y, l/y, etc. and backs out values for \alpha, \beta, \delta, etc., since calibration is just a mapping between parameters and steady-state values. This approach works reasonably well for less developed economies, where one cannot find parameter values in the literature but can compute long-run averages if data on k, y, l, etc. are available.
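
For example, in a standard Cobb-Douglas RBC model the steady state gives a direct mapping between the great ratios and the parameters (a textbook sketch; other model features would change the mapping):

$$
\frac{I}{Y} = \delta\,\frac{K}{Y}, \qquad \frac{1}{\beta} = \alpha\,\frac{Y}{K} + 1 - \delta, \qquad \text{labor share} = 1 - \alpha,
$$

so long-run averages of I/Y, K/Y, and the labor share pin down \delta, \beta, and \alpha.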

1. Are there other ways to calibrate a DSGE model in a data-poor environment where micro data are not available?
2. Also, I guess no parameter is universally the same across all economies, right? That is, all parameter values are country-specific?
3. Also, calibrating a model using either of the two methods above does not necessarily imply that the theoretical moments will match the moments in the data, right?
4. For that, you would have to use moment-matching techniques, for example IRF matching. But then you cannot use the matched moments to evaluate the model. So how do you evaluate the model if you calibrate it with a moment-matching technique?

  1. Usually, there are three ways of setting parameters:

    1. Using micro-estimates.
    2. Doing informal moment-matching, e.g. based on the great ratios.
    3. Formal estimation using either full or limited information techniques.

    Option 2 often has the least data requirements.

  2. I doubt that parameters are exactly the same for all economies, but priors differ on whether countries differ mainly in their structural parameter values or in the underlying structure of the economy.

  3. Targeting the great ratios is a form of moment matching. Thus, you will usually match these very moments, but of course in general not the untargeted ones. I guess you were referring to those in your question.

  4. Even with moment matching there are usually unrestricted/untargeted moments you could check. Or sometimes there is outside evidence you can look at.

For parameters like the degree of risk aversion and the Frisch elasticity of labor supply, we can, for example, start with values widely used in the literature for developed economies and then play around with them to match some second moments for less developed economies, right? I know there is no guarantee of a good match for all second moments of interest, but would matching just the great ratios suffice? I mean C/Y, K/Y, and I/Y.

Is this the stage where people loop over sets of parameters to find possible good matches?

  1. The problem with the Frisch elasticity and the risk aversion is that they typically do not affect the great ratios: they matter for second moments but not for first moments.
  2. The type of match you will be able to achieve typically depends on whether your system is overidentified or just identified. With as many target ratios as parameters, you can often achieve a perfect match (essentially N equations in N unknowns; only nonlinearity sometimes prevents the existence of a solution).
  3. In the just-identified case with only levels, you can often work out the results analytically. If second moments are targeted, you often need to use a computer, e.g. a grid search or moment matching with an optimizer.
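
As a purely illustrative sketch of the just-identified levels case (the targets below are made-up quarterly numbers, not taken from any particular country), one can numerically invert the parameter-to-ratio mapping with a root finder; for second-moment targets the same idea applies, but with the model solution nested inside the objective:

```python
# Illustrative sketch: back out (alpha, beta, delta) from targeted great ratios
# in a standard Cobb-Douglas RBC steady state -- N equations in N unknowns.
# The target values are hypothetical, not from any particular country.
import numpy as np
from scipy.optimize import root

targets = np.array([10.0, 0.25, 0.64])  # K/Y, I/Y, labor share (quarterly)

def implied_ratios(params):
    """Map structural parameters to the steady-state ratios they imply."""
    alpha, beta, delta = params
    K_Y = alpha / (1.0 / beta - 1.0 + delta)  # from the steady-state Euler equation
    I_Y = delta * K_Y                         # I = delta * K in steady state
    labor_share = 1.0 - alpha                 # Cobb-Douglas factor shares
    return np.array([K_Y, I_Y, labor_share])

def gap(params):
    """Distance between implied and targeted ratios; zero at the calibrated point."""
    return implied_ratios(params) - targets

sol = root(gap, x0=[0.3, 0.98, 0.03])         # rough starting values
alpha, beta, delta = sol.x
print(f"alpha = {alpha:.3f}, beta = {beta:.4f}, delta = {delta:.3f}")
# With these targets: alpha = 0.360, beta ~ 0.9891, delta = 0.025
```

In the overidentified case one would instead minimize a weighted distance between implied and targeted moments (e.g. with scipy.optimize.minimize) rather than solving the system exactly.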

Many thanks for the clarifications!! Maybe I am pushing this a little, but let me ask one last question, which is related.

I have read a number of papers trying to explain the consumption volatility puzzle (higher volatility of consumption relative to output). This paper, for example, concludes that “If the informal sector is poorly measured, consumption is more volatile than output.”

But we can also make consumption more volatile than output just by calibrating some parameters to match desired second moments of interest.

So in such “causal” DSGE models, the point is to compare some novel model to a benchmark model to illustrate the effect the author wants to show, right?

Like in the example above, I guess it is the increase in volatility in the novel model that matters. Whether the increase was from 0.85 (in the benchmark model) to 1.5 (in the novel model) or, say, from 1.6 (in the benchmark model) to 2.3 (in the novel model) does not matter much, right? Because it depends on how we calibrate the benchmark model. Is my intuition right?

Also, I am doing some experiments from Uribe and Schmitt-Grohé’s book. It seems one can match some second moments by including a scale parameter n in the shock processes, for example z_t = \rho z_{t-1} + n \epsilon_t, and then adjusting it until you get the desired results. Maybe it is used in cases where the rest of the parameters cannot deliver the desired second moments, something like a “lender of last resort” for the calibration.
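
For what it is worth, with z_t = \rho z_{t-1} + n \epsilon_t and \epsilon_t \sim N(0, \sigma_\epsilon^2), the unconditional standard deviation of the process is

$$
\operatorname{std}(z_t) = \frac{n\,\sigma_\epsilon}{\sqrt{1-\rho^2}},
$$

so n simply rescales the volatility of the driving process and, in a first-order solution where z is the only shock, the volatilities of the endogenous variables scale with it one-for-one.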

The case you are describing is one where you first have to get the facts straight before you can go to modeling. If consumption is mismeasured, there is no point in writing down a model that provides a structural explanation of the mismeasured series.

Thanks a lot for the reply, Prof. Pfeifer.

Does ‘getting the facts straight’ here mean making the necessary ‘novel’ assumptions that underlie the model?

To illustrate that these ‘facts’ (in the model) are the cause of a particular business cycle phenomenon, one should keep the same calibration in both the benchmark and the novel model, right?

My point was that maybe calibration does not matter much if you want to say that some ‘facts’ or a set of assumptions lead to a particular result or phenomenon. In other words, the results should hold regardless of calibration, right?

No, what I am saying is: in the example you cited, people do not agree on whether consumption is actually more volatile than output in emerging economies. It may be a statistical artifact resulting from measurement error. If you don’t know what you are trying to explain, modeling will fail.