Small Question about Bayesian Estimation

Hi Johannes,

I thought it would be better to ask this on the board, so don’t worry about the email I sent.

How should data on observables be fed into the estimation process? This became apparent as a problem when I fed in hours worked, normalized to one: estimation worked "okay", but was not fully robust (as the Iskrev article you recommended points out). When I added other observables, such as capital and output, I tried many variants of detrending – first differences, first differences divided by the long-run average, and growth rates – but none seemed to work. A simple question, but very important for estimation!

Thank you as always for your advice,

Complicated answer in “A Guide to Specifying Observation Equations for the Estimation of DSGE Models” at sites.google.com/site/pfeiferecon/Pfeifer_2013_Observation_Equations.pdf

I have had good experience using the one-sided HP filter (ideas.repec.org/c/dge/qmrbcd/181.html).
Also, when bringing labor data to your model, be careful: in most DSGE models labor is normalized between 0 and 1, whereas real-world data are given as total hours worked or something similar. So for labor, the best way of bringing model and data together is to use growth rates, e.g. dL = log(L) - log(L(-1)).
Another pitfall is the inflation rate:
When you want to use inflation as an observable variable, be careful how you defined inflation in your model: 1. PI = P(+1)/P - 1, or 2. just PI = P(+1)/P. If the latter is the case, then you have to add 1 to the growth rate you have calculated from a GDP deflator (or something similar) using log differences.
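To make the mapping concrete, here is a minimal Python sketch (the thread works in MATLAB/Dynare; the deflator numbers below are made up for illustration) of turning a price index into net and gross inflation observables:

```python
import math

# Hypothetical quarterly GDP deflator levels (illustrative numbers only)
deflator = [100.0, 100.5, 101.2, 101.8]

# Net inflation via log differences: pi_t = log(P_t) - log(P_{t-1}),
# matching a model definition PI = P(+1)/P - 1 (case 1).
net_pi = [math.log(p1) - math.log(p0) for p0, p1 in zip(deflator, deflator[1:])]

# If the model instead defines gross inflation, PI = P(+1)/P (case 2),
# add 1 to the observable before feeding it to estimation.
gross_pi = [1.0 + pi for pi in net_pi]
```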

To summarize:

  1. Save real-world data measured in nominal units (dollars, euros, etc.).
  2. Deflate them by dividing the data by a price deflator (the GDP deflator, for instance).
  3. Log the data.
  4. Detrend the data with a filter (e.g. the one-sided HP filter).
  5. Demean the data in case their mean is not zero.

For growth rates:

  1. Save real-world data, like total hours worked or the GDP deflator.
  2. Calculate the net growth rate with log differences, e.g. dL = log(L) - log(L(-1)).
  3. Be careful whether your model uses gross or net growth rates. If it uses gross growth rates, then you have to add 1 to the growth rates calculated in step 2.
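The growth-rate steps can be sketched in Python as well (again hedged; the hours figures are made up):

```python
import math

# Step 1: hypothetical total hours worked per quarter (any units)
hours = [250.1, 250.8, 249.9, 251.3]

# Step 2: net growth rate via log differences, dL = log(L) - log(L(-1))
dL = [math.log(b) - math.log(a) for a, b in zip(hours, hours[1:])]

# Step 3: if the model is written in GROSS growth rates, add 1
dL_gross = [1.0 + g for g in dL]

# If the model variable is mean zero, also demean the net growth rate
mean_dL = sum(dL) / len(dL)
dL_demeaned = [g - mean_dL for g in dL]
```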

(A hint: I have had good experience using mode_compute=10 in Dynare's estimation command.)


This is all excellent information. Johannes – that is a wonderful guide you produced; I am sure many others throughout the world will find it very valuable – a go-to source. I was reading Canova’s JME articles – I can imagine the controversy during the 1980s.

For hours worked, Daniel, would you suggest removing the normalization to one? Or should it be okay to leave it as is? And would using growth rates for one variable but not for the others be problematic for estimation?

Lastly, I am assuming that all aggregates need to be detrended – not just items like output, but also capital and even less-common variables like aggregate pollution emissions? I will keep trying the different filters. The one-sided HP filter produced some slightly peculiar results, though: when I use the "cycle" (not the trend) matrix that the filter produces, values vary widely from year to year. One year a value might be in the thousands while in another it is "small" (a casual interpretation). This led the Bayesian estimation procedure to fail with a non-PSD Hessian.

EDIT: I was able to get it working and the parameter estimates are reasonable, but the steady state predictions are not. Do you think the root of this problem is the perturbation method finding the wrong steady state? If so, I will have to learn how to supply an expression for the steady state even though a closed-form solution is not possible. Another remaining question is how best to treat hours – normalize them, or leave them as an aggregate and then detrend with the HP filter like the other variables. (I tried using hours as an aggregate, taking the log, and then applying the HP filter.)

Again, really appreciate your spot-on advice!

Of course every variable, like investment, consumption, etc., has to be detrended, not only output. I don't think that using growth rates and simultaneously %-deviations from a trend level is problematic, since one commonly also uses inflation in estimation, which is nothing else than the growth rate of prices.
Regarding the one-sided HP filter: have you logged the data before detrending them? Taking logs puts different measurement units on a common scale.
Regarding steady state:
You should do it as follows:

  1. Just after the parameter section and before the model section of the mod-file, write a command that executes an m-file which calculates the steady state for the given parameter values using a standard minimization function in MATLAB.
  2. Then save the result as, say, Yss, where Yss is the steady state value of Y and is declared as a parameter in the mod-file.
  3. In the initval section, you then just write Y = Yss. (Also always remember to use the steady_state operator in your observation equations, but I guess you have done that already, since your estimation finds reasonable estimates.)
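As an illustration of the numeric solve in step 1 (in Python rather than an m-file, with a made-up RBC calibration, and a simple bisection standing in for MATLAB's minimization routine), solving the steady-state Euler condition 1 = beta*(alpha*K^(alpha-1) + 1 - delta) for capital:

```python
# Hypothetical RBC calibration (illustrative values, not from this thread)
alpha, beta, delta = 0.33, 0.99, 0.025

def euler_gap(K):
    """Residual of the steady-state Euler equation 1 = beta*(alpha*K^(alpha-1)+1-delta)."""
    return beta * (alpha * K ** (alpha - 1.0) + 1.0 - delta) - 1.0

def solve_Kss(lo=1e-6, hi=1e6, tol=1e-12):
    """Bisection: euler_gap is decreasing in K, positive at lo, negative at hi."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if euler_gap(mid) > 0.0:   # marginal product still too high -> K too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Kss = solve_Kss()
Yss = Kss ** alpha   # with labor normalized to one, Y = K^alpha
```

Yss would then be passed to the mod-file as a parameter and used in the initval section as described above.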

Hope that helps.

As detailed in the guide, mixing different types of observation equations like one-sided HP filtering and first differences poses no theoretical problem. The only issue is one of consistency. Usually, referees want to see a consistent treatment. When using different filters, you are filtering out different frequency components you think do not belong to the business cycle domain. Arguing that you use different passbands for different variables might be hard.

You need to filter all non-stationary (as in integrated) variables.

Regarding the treatment of hours: for consistency, you might opt for using the same filter. But be aware that this might lead to trouble in estimation. By filtering, you are putting low-frequency components potentially unrelated to growth into the stopband of your filter. This might then obscure problems in estimation and lead to strange results. For example, labor tax rates exhibit near-unit-root behavior. In estimation, this might introduce parameter estimates that also imply near-unit-root behavior of hours worked. However, because you are filtering out those low-frequency components, those problems for hours might not even appear in estimation. You will only see that the resulting parameter estimates are implausible. I will add a section on this to the guide.