Questions on parameter estimation

Dear Professor Pfeifer,

Thank you for reading this post.

I have some questions about estimating my DSGE model. Would you please give me some advice? Thank you so much.

(1) When I calibrate the parameters using actual data, do I need to choose a longer data series? For instance, I use the mean values of data from 1990 to 2020 for the calibration and the data series from 2000 to 2020 for the Bayesian estimation. Would that be right, or should I use data series covering the same time interval?

(2) When I calibrate the capital depreciation parameter, I get 0.12, which seems very large, as it is normally set between 0.02 and 0.04. In that case the quarterly rental rate of capital is even larger than 12%. How should I interpret this large value? Does it make sense in the real world?

(3) When doing the Bayesian estimation, which data series should I use to match investment in the model: fixed asset investment or gross fixed capital formation?

Thanks again for reading. Any advice will be appreciated.

  1. If calibration is about long-run averages, then yes, you may use the longest time series. But why don’t you use the longest series for estimation? Is there a break? That would change things.
  2. That would imply 48% annual depreciation. That’s crazy. Did you make a mistake in time aggregation? That would cause a factor of 4 (see the back-of-the-envelope conversion after this list).
  3. That depends on the model. I think gross fixed capital formation is more common.
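
To make the time-aggregation point concrete, here is the back-of-the-envelope conversion, assuming the standard law of motion for capital $k_{t+1}=(1-\delta)k_t+i_t$ (writing $\delta_q$ for the quarterly and $\delta_a$ for the annual depreciation rate):

$$
\delta_a \approx 4\,\delta_q \quad\text{(linear approximation)}, \qquad \delta_a = 1-(1-\delta_q)^4 \quad\text{(with compounding)}.
$$

With $\delta_q = 0.12$ this gives $\delta_a \approx 0.48$ (about $0.40$ with compounding), whereas a typical quarterly value of $0.025$ corresponds to roughly 10% per year. Feeding an annual rate into a quarterly model is exactly the factor-of-4 mistake mentioned in point 2.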

Dear Professor Pfeifer,

Thank you for your reply.

  1. Yes, as you said, some data series used for estimation only start in 2000, while other series (e.g. output, consumption) go back much further.

    Also, some big policy reforms happened in those years, which may have affected the data before and after. I don’t know whether I should take this policy factor into account when choosing the time range of the data.

    Could I use long-run averages for the calibration and the shorter series for the estimation? Or should I use the same long-run sample and add NaNs to the shorter series for the estimation (and would this affect the reliability of the estimation results)?

  2. This problem is related to the first one. If I use the long-run average of I/Y to calibrate this parameter, it comes out much smaller, while if I use data starting in 2000, the value is as large as 0.12 (the steady-state relation I am using is sketched below). Could this be caused by the fast development of the country?
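
For reference, the steady-state relation I am using to back out the depreciation rate (assuming the standard law of motion $k_{t+1}=(1-\delta)k_t+i_t$, which gives $i=\delta k$ in steady state) is

$$
\delta = \frac{I/Y}{K/Y},
$$

so a higher average investment share in the 2000-2020 sample mechanically raises the implied $\delta$ for a given capital-output ratio.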

Thank you for reading. Any advice will be appreciated.

  1. Why don’t you then use the full sample and estimate the model with missing values? The Kalman filter can handle that; see the sketch after this list.
  2. If there are trends like that in the model, you need to think hard about how to proceed. Search the forum for similar posts. There should be guidance there.
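
To illustrate what estimating with missing values looks like in practice, here is a minimal sketch in Python/pandas of building one data set over the full 1990Q1-2020Q4 sample in which the shorter observables simply contain NaN before 2000, so that the Kalman filter can treat those periods as missing observations. The file and column names are hypothetical.

```python
# Sketch: align a long and a short observable on the full quarterly sample.
# Periods where the short series has no data are left as NaN, i.e. treated
# as missing observations by the Kalman filter.
import pandas as pd

# Full quarterly index, 1990Q1-2020Q4
full_index = pd.period_range("1990Q1", "2020Q4", freq="Q")

# Long series (e.g. output growth), available over the whole sample
y_obs = pd.read_csv("output_obs.csv", index_col=0)   # hypothetical file
y_obs.index = pd.PeriodIndex(y_obs.index, freq="Q")

# Short series, only available from 2000Q1 onwards
x_obs = pd.read_csv("short_obs.csv", index_col=0)    # hypothetical file
x_obs.index = pd.PeriodIndex(x_obs.index, freq="Q")

# Align both on the full index; the pre-2000 entries of the short series
# become NaN automatically.
data = pd.concat([y_obs, x_obs], axis=1).reindex(full_index)

# One data set covering 1990Q1-2020Q4, with NaN marking missing observations
data.to_csv("estimation_data.csv")
print(data.isna().sum())  # number of missing periods per observable
```

The point is that nothing gets dropped: the long series keep their early observations, and the short series are padded with NaN instead of truncating the whole sample to the common window.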

Dear Professor Pfeifer,

Sorry, but I’m not following your advice on the second question. Which topic should I search for in the forum?

Thank you.

For example