Estimation problems & data treatment

Dear all,

I have been trying to estimate a medium-sized DSGE model with Gertler-Karadi banks that hold both government bonds and corporate securities on their balance sheet. I employ Spanish data to do so.

I have read many threads on this forum and I think I have avoided some of the standard mistakes discussed here. I have treated the data by dividing quantities by the GDP deflator and by a measure of the working population to arrive at real GDP per capita, real consumption per capita, etc. Interest rates are converted into real rates by dividing by the GDP deflator as well. All variables (including interest rates and inflation) go through the one-sided HP filter that I downloaded from this forum. Attached you will find two sets of observables: historical.pdf, which contains some of the variables for the period 1998-2007, and historical_fig_23.pdf, which contains data for the period 1998-2010. The latter sample includes the financial crisis and shows unit-root-like behavior, which is why I switched to the 1998-2007 data at one point.
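Schematically, my treatment of each quantity series looks as follows (variable names are placeholders, and I assume the forum script returns trend and cycle in that order):

    % Sketch of my data treatment for the quantity series (names are placeholders)
    y_real_pc  = y_nominal ./ gdp_deflator ./ working_pop;   % real GDP per capita
    [ytrend, ycycle] = one_sided_hp_filter(y_real_pc, 1600); % lambda = 1600 for quarterly data
    y_obs_data = ycycle;                                     % cyclical component used as observable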

As I am not very familiar with what data should look like in a Bayesian estimation, I would be grateful if someone could tell me whether these data are suitable for estimation at all, or whether I have already made a mistake in my data-processing or data selection.

Then I put the model in loglinear form, including inflation and gross interest rates. As I am employing the loglinear form in my Dynare file, I link all observables and model variables (call them X_t, with loglinear counterpart X_t_tilde) through the following relation, which I believe follows Dr. Pfeifer's guide on observation equations:

X_t_obs = X_t_tilde - steady_state(X_t_tilde);

I do not multiply by 100, a step I regularly read about on this forum, because both the data from my one-sided HP filter and the loglinear deviations from the steady state are in decimals.
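For example, for output the observation equation in my mod file reads (schematically):

    y_obs = y_tilde - steady_state(y_tilde); // in loglinear form, steady_state(y_tilde) is 0, so the subtraction is redundant but harmless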

Then I start estimating. I run the identification check at the prior mean, and it does not flag any problems. I start with mode_compute=6 and then move on to mode_compute=9. I have checked that stochastic simulations of my model generate sensible results for financial crisis shocks, capital quality shocks, government spending shocks, and productivity shocks. I provide a separate steady state file so that I can match some specific first-moment targets while estimating parameters that affect the steady state. After some debugging, this file correctly calculates the steady state during the estimation.
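The relevant part of my mod file looks roughly like this (file names are placeholders):

    identification;                      // identification check at the prior mean
    estimation(datafile=spain_data,      // placeholder data file name
               mode_compute=6,           // first pass: Monte-Carlo based optimizer
               mh_replic=0);
    estimation(datafile=spain_data,
               mode_compute=9,           // second pass: refine the mode with CMA-ES
               mode_file=GK_model_mode); // placeholder: mode file saved by the previous run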

The results are mixed, and the problem is always one of the following three:

  1. the value of one or two parameters does not make sense economically,
  2. there is still drift in one or more parameters,
  3. for the 1998-2007 period it is hard to find the mode.

As far as I can tell there are no mistakes in the model, especially because the identification check tells me that there are no problems. As I am relatively new to Bayesian estimation of DSGE models, I have the feeling that I might be missing something that is sabotaging my estimation, either in the data or at a later stage.

I would be very grateful if someone could take a look at my data, and if necessary at my files, which I will happily upload in that case. Thanks in advance.

historical.pdf (9.9 KB)

historical_fig_23.pdf (10.5 KB)

Hi, your post is a lot to parse, and having the data file and the mod-file would help. From what you describe, there are a couple of questions/issues:

  1. Variables like real GDP per capita must be logged before one-sided HP filtering.
  2. If you want to go from nominal to real interest rates, you need to divide by expected inflation (or a proxy thereof), not by the price level.
  3. It is not clear what frequency your model has and whether the issue of annualized interest rates is dealt with correctly.

First of all, thanks a lot, Dr. Pfeifer, for being willing to take a look at my problem!

Attached you will find a zip file with the necessary files and the data.

  1. I did not mention it, but I do take logs before my data go through the HP filter.
  2. Conversion from nominal to real rates occurs through division by the inflation rate, not by the GDP deflator as I incorrectly wrote above. I have no measure of expected inflation, so I divide by current inflation. Or should I divide by next period's inflation?
  3. My model is at quarterly frequency. The original data contained annualized interest rates in percent (downloaded from the ECB Statistical Data Warehouse and Eurostat). I convert them into gross quarterly rates by dividing by 400 and adding 1 (see the one-line sketch below).
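In MATLAB terms, the conversion is simply:

    % Annualized net rate in percent -> gross quarterly rate
    R_quarterly_gross = 1 + r_annual_percent/400;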

Thanks again for taking a look!

GK_dynare_estimation.zip (39.4 KB)

  1. I see. This is correct.
  2. Taking the current inflation rate is definitely wrong. You could proxy expected inflation by the realized future inflation; under rational expectations you should be right on average (see the sketch below). But is there a reason you cannot use the nominal interest rate as an observable?
  3. That is also correct.
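For point 2, a one-line sketch of the timing, assuming vectors of gross nominal rates and gross inflation (names are placeholders):

    % Ex-post real rate as a proxy for the ex-ante real rate: r_t = R_t / pi_{t+1}
    r_real = R_nominal(1:end-1) ./ pi_gross(2:end);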
  1. I did not think of proxying expected inflation by realized future inflation, but I agree that this should be right on average. I do have the nominal interest rate in my model (it applies to bank deposits), but I also have long-term government bonds, for which I want to use the real return. I have also tried estimating the model without the real return on bonds as an observable (and eliminating a shock to prevent stochastic singularity), but the problems persist in that version with only the nominal interest rate.

Of course I will try your suggestion of dividing by next period's inflation rate as a proxy for expected inflation, but does such a mistake have the potential to screw up the estimation?

  1. What is the problem with your estimation in the first place?
  2. Stochastic singularity applies when you have more observables than shocks, not the other way round. So when you drop an observable, you can generally leave the shocks as they are.
  3. Are you already using the inflation rate as an observable? In its current form, it may be an issue if you use the real and nominal interest rates as well as the inflation rate.
  4. I don’t get the point about long-term government bonds. Are those real bonds? And how do you deal with the maturity structure? A linear model will not have a term structure.
  1. My problem is that I cannot find the mode when I employ the 1998-2007 time series, and that there is no convergence for a few estimated parameters when I employ the 1998-2010 time series, on top of the estimated value of a key parameter being economically implausible.

  2. You are right, thanks for pointing this out.

  3. I do employ the inflation rate and the nominal interest rate as observables, but I do not use the real rate on deposits (computed by dividing the nominal rate by (expected) inflation) as an observable. The real return on government bonds is an observable in my estimation.

  4. I have long-term government bonds with infinite maturity but decaying coupon payments, as in Woodford (1998, 2001). These are currently nominal bonds, but I only have the real return on bonds as a variable in my model (although I could easily introduce a variable for the nominal return on these bonds). Therefore I compute the real return on government bonds from the data as an observable. Now that I think about it, I should introduce the nominal return as a model variable and use the nominal return from the data as an observable.
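For reference, a bond in this setup pays coupons that decay geometrically at rate \rho. With q_t denoting the bond price, the gross nominal holding return between t and t+1 is

    R^b_{t+1} = (1 + \rho q_{t+1}) / q_t,

and the real return divides this by gross inflation. (This is my reading of the standard decaying-coupon formulation; the exact coupon normalization may differ in my files.)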

Implement 4) and then please provide me with the files if the problem in 1) persists.

Dear Dr. Pfeifer,

I have followed your suggestions, and the estimation still cannot find the mode. Attached is a zip file containing my files; I would be very grateful if you could take a look at them.

GK_dynare_estimation_August_2018.zip (39.6 KB)

One problem may be that your data are not mean zero over your sample, while they should be according to your observation equations. The big problem seems to be that the data push the model to the boundary of the determinacy region and kappa to the range where the steady state becomes complex. Maybe there is a good economic reason for this.

Dear Dr. Pfeifer,

First of all thanks again for taking the time to take a look at my code, it is really appreciated!

Looking at the data, you are right about the mean-zero problem. However, I am slightly puzzled, as I used the one-sided HP filter for all my time series, just as you recommend in your “Guide to specifying observation equations…” paper. On page 34 of the June 29, 2017 version it says that “data filtered with the one-sided HP-filter will always have (approximately) mean zero”. It could be that my data are the exception to this rule, but how should I proceed in that case? Could I simply demean the data after having filtered it with the one-sided HP filter?
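Concretely, I have something like the following in mind:

    % Demean the filtered cyclical component over the estimation sample
    y_obs_data = ycycle - mean(ycycle);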

In principle I cannot come up with a reason why the data push the model to the boundary of the determinacy region, although theta, which denotes the exogenous probability of bank survival in the GK model, is usually close to 1 in the financial frictions literature. What could make it economically reasonable for the parameters to be pushed to the boundary of the determinacy region?

Before you posted your answer I had tried estimating the model with fewer parameters. When I only estimate the AR(1) parameters and the standard deviations, the estimation converges, and I have also achieved this while additionally estimating the Rotemberg parameters kappa_p and kappa_w. I did so with 9 shocks in the model and 8 observables. Could it simply be that one or two parameters are screwing up the estimation?

  1. Small deviations from 0 are always possible. But it sounded like you filtered the data first and then tested different subsamples. If so, data that are mean zero over the full sample will not be mean zero on the subsamples.
  2. You can try whether demeaning helps.
  3. If you can identify parameters that introduce problems in estimation, that is usually helpful. Have you tried estimating the parameters sequentially? That is: add a new parameter and set the initial values for the already estimated ones to the posterior mode from the previous run (see the sketch after this list). That reduces the dimensionality of the problem and makes issues with mode-finding less likely. It also shows clearly where the problem appears and might give you an economic indication of what the problem is.
  4. Regarding \theta: you need to think about which data moments this parameter particularly affects and where it helps to have such a value.
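In Dynare, one way to implement the sequential approach is to set the starting values directly in the estimated_params block (numbers here are placeholders):

    estimated_params;
    // already estimated: start at the posterior mode from the previous run
    rho_a,   0.85, beta_pdf,  0.50, 0.20;
    // newly added parameter: no starting value given, so it starts at the prior mean
    kappa_p,       gamma_pdf, 50,   20;
    end;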
  1. You are right. After looking at my filtered data again, I see that I take a subsample after filtering. I agree that the mean of such a subsample is not necessarily close to zero. I am now redoing the filtering for each sample, so that in each sample the mean of my data is approximately zero (see the sketch after this list).

  2. See 1.

  3. I am doing this now, and I have figured out that the problem arises when I estimate the (wage) inflation indexation parameters in conjunction with the Rotemberg parameters kappa_p and kappa_w.

  4. I am going to do this, and I think I will need measures for the return on corporate bonds and the return on government bonds (the latter I already have).
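For point 1, what I now do per sample is, schematically:

    % Filter each estimation sample separately so each sample has mean approx. zero
    sub = log_y(sample_start:sample_end);              % logged raw series for this sample
    [ytrend, ycycle] = one_sided_hp_filter(sub, 1600);
    y_obs_data = ycycle;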

I also managed to extend my time series substantially, to cover 19 years, which should help the estimation as well.

I think I have enough information now to proceed. Thanks a lot, Dr. Pfeifer, for your time and patience!