# Linearly detrended data or demeaned growth rate data?

Dear Johannes,

If I would like to estimate a DSGE model with no trend (all shocks are transitory) using U.S. data, it seems that I can detrend the data in the following ways (leaving aside the one-sided HP filter):

1. linearly (or quadratically/cubically) detrended data;
2. demeaned growth rate data.

a. Are both 1 and 2 OK?

b. If 1 is OK, how do I know which detrending method is best among linear, quadratic, and cubic?

c. If I use 1, can I directly compare model moments (theoretical moments) with data moments by adding a stoch_simul command after the estimation command? I mean that this is unlike HP-filtered data, where you have to use hp_filter=1600 in the stoch_simul block.

d. Can 1 and 2 be jointly used to detrend the data? For example, could the y, c, and I series use cubic detrending while some other series use demeaned growth rate data?

Kind regards,
Huan

a.) Yes, both are OK.
b.) This is a matter of taste. Most people check whether the data look OK after detrending.
c.) Yes, this is the implicit assumption you are making: the linearly detrended data are treated as equivalent to the stationary data coming from your model. Of course, you could linearly detrend your model variables as well, but most of the time this will not do anything, because they are not really trending (unless you have so much persistence that the data can drift away from the mean for a long time).
d.) Conceptually, this is not a problem; you can do this. But you might have to defend it to your referees, who might be skeptical that you are using two different concepts of trend (of course, demeaned first differences will also take out a purely linear trend).
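The two detrending options discussed above can be sketched in a few lines of NumPy. This is only an illustration on a made-up log output series (the trend slope, noise, and sample size are arbitrary placeholders, not U.S. data); polynomial detrending of any order and demeaned log first differences both deliver a mean-zero series by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Hypothetical log output: a linear trend plus a persistent noise component
log_y = 0.005 * np.arange(T) + 0.01 * rng.standard_normal(T).cumsum()

def poly_detrend(x, deg=1):
    """Remove a fitted polynomial time trend of order `deg` from x."""
    t = np.arange(len(x))
    coeffs = np.polyfit(t, x, deg)
    return x - np.polyval(coeffs, t)

# Option 1: linear (deg=1), quadratic (deg=2), or cubic (deg=3) detrending
y_lin = poly_detrend(log_y, deg=1)
y_cub = poly_detrend(log_y, deg=3)

# Option 2: demeaned growth rates (log first differences minus their mean)
dy = np.diff(log_y)
y_growth = dy - dy.mean()
```

Note that option 2 loses one observation (the first difference is undefined for the first period), which matters for how you set first_obs in the estimation.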

Dear Johannes,

Generally, people use credit spread data in levels during estimation. Could I ask whether it is right to apply linear detrending to the credit spread data?

Huan

That depends on your goal. In general, your observable is what your model needs to explain. For stationary variables, we usually force the model to account for all movements. If you think that for some reason there is a trend in the credit spread that should not be explained by the model and its shocks, you can also detrend it. But you need to be able to justify this choice.

Hi Johannes,
I have more questions about data processing:

1. First-difference filter: If we use demeaned growth rate data for a stationary model as mentioned in the post, what about interest rate series? Although we expect them to be stationary, they are often non-stationary in the data. Can we still take the first difference of the interest rate, just like output etc.? Or, to make such rate variables stationary, can we use linear detrending or the one-sided HP filter?
For moment matching: do we need to compute the growth series of the Kalman-filtered estimates and then calculate the second moments to match against the data moments?

2. One-sided HP filter: After applying the one-sided HP filter, the series becomes roughly mean zero. I have 91 observations, and the mean of the HP-filtered log(Y per capita) is, say, -1.33E-03. If I scale the data by multiplying by 100, the mean becomes -1.33E-01, which can no longer be considered mean zero. Is this because of the small number of observations? That means I would have to demean the HP-filtered series to make it truly mean-zero stationary. Would that be wrong?
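The arithmetic in point 2 can be sketched directly (with a made-up stand-in for the filtered cycle, not the actual 91-observation series): multiplying the data by 100 multiplies its small but nonzero sample mean by 100 as well, so the scaled series has to be demeaned again if an exactly mean-zero observable is required:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a one-sided HP-filtered cycle of log(Y per capita):
# 91 observations, constructed to have a sample mean of -1.33e-3
cycle = 1e-2 * rng.standard_normal(91)
cycle = cycle - cycle.mean() - 1.33e-3

# Scaling by 100 (to express the cycle in percent) scales the mean too:
# -1.33e-3 becomes -1.33e-1
scaled = 100 * cycle

# Demeaning after scaling restores an exactly mean-zero series
scaled_demeaned = scaled - scaled.mean()
```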

Thanks,