# Second moments

Dear Johannes,

Can I ask you a question about the second moments of observed variables after estimation?
My model is a one-sector model without a specified trend. To remove the trend from trending variables like output, I tried both a first-difference filter and a one-sided HP filter, separately.
For the first-difference filter, I demean the data to take out the mean. Hence, output, consumption, investment, and hours all have mean 0.
After estimation, the standard deviations of the observables are much higher than the data standard deviations. However, the relative standard deviations are similar. This confuses me a lot. Do you have any suggestions on this issue?

Catherine

Did you express everything in percentage deviations? Or did you keep the levels of the variables?

Hi, Johannes,

I log-linearized the model by hand, so every variable in the code represents the percentage deviation from steady state (the steady states are all 0).
The measurement equations for **demeaned growth-rate data**:

```y_obs = y - y(-1) + y_ME; c_obs = c - c(-1); i_obs = i - i(-1); h_obs = h;```

The measurement equations for **one-sided HP-filtered data (cyclical component)**:

```y_obs = y + y_ME; c_obs = c; i_obs = i; h_obs = h;```

In both cases, the model second moments are much larger than the corresponding data second moments.
Many thanks,
Catherine

How big are the differences? And how many shocks do you have? And are the data also percentage growth rates?

I have four shocks: a sunspot shock, a transitory technology shock, a preference shock, and measurement error on output.
The data are percentage growth rates, for example, y_obs = log(Ypc,t) - log(Ypc,t-1) - mean(log(Ypc,t) - log(Ypc,t-1))
h_obs = log(Hpc,t) - log(mean(Hpc))
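For concreteness, the two transformations could be coded like this (a minimal sketch with hypothetical data; `Ypc` and `Hpc` stand for the per-capita series above):

```python
import math

def growth_obs(levels):
    """Demeaned log growth rate: log(Y_t) - log(Y_{t-1}) minus its sample mean."""
    logs = [math.log(x) for x in levels]
    g = [logs[t] - logs[t - 1] for t in range(1, len(logs))]
    mean_g = sum(g) / len(g)
    return [x - mean_g for x in g]

def hours_obs(levels):
    """Log hours minus the log of the sample mean, as in h_obs above."""
    mean_h = sum(levels) / len(levels)
    return [math.log(x) - math.log(mean_h) for x in levels]

Ypc = [100.0, 101.0, 103.0, 102.5]   # hypothetical per-capita output levels
y_obs = growth_obs(Ypc)              # has mean zero by construction
```

By construction, `y_obs` sums to zero, which is what the demeaning is meant to achieve before matching it to a mean-zero model variable.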

After estimation, for example, the data y_obs standard deviation is 0.87, while the model y_obs standard deviation is 1.62;
the data h_obs standard deviation is 4.91, while the model h_obs standard deviation is 15.87.

Unfortunately, this is not entirely unheard of. In Born/Peter/Pfeifer (2013): Fiscal news and macroeconomic volatility, we used the `endogenous_prior` option for exactly this reason. You should check whether this helps. If it does, the actual estimation might be fine.

Hi Johannes,

After using this option, my problem has been solved. Thank you so much!
Could you explain a little more about `endogenous_prior`? What is the mechanism that decreases the second moments, and are there any disadvantages to using it?
I find that after using this option, the mode-check plots look a bit different, especially for the persistence of the technology shock and the preference shock, as shown in the attached file.

Many thanks,
Catherine
CheckPlots1.pdf (6.77 KB)

See the manual on this option and the reference therein. The Christiano/Trabandt/Walentin endogenous prior is based on “Bayesian learning”. It updates a given prior using the first two sample moments.
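As a purely stylized illustration of what "updating a given prior using sample moments" means (this is the textbook conjugate-normal update, not the actual Christiano/Trabandt/Walentin construction):

```python
# Stylized Bayesian updating: a normal prior on a parameter combined with
# n observations of known variance sigma2 yields a posterior that is a
# precision-weighted average of prior mean and sample mean.
# (Illustration only; NOT the exact CTW endogenous-prior formula.)

def normal_update(prior_mean, prior_var, sample_mean, sigma2, n):
    prior_prec = 1.0 / prior_var
    data_prec = n / sigma2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * sample_mean)
    return post_mean, post_var

# The more data (larger n), the more the posterior is pulled toward the
# sample moment, and the tighter it becomes:
m, v = normal_update(prior_mean=1.0, prior_var=1.0,
                     sample_mean=2.0, sigma2=1.0, n=100)
```

The endogenous prior works in this spirit: the sample's first two moments pull the effective prior toward configurations consistent with the observed data variability.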

Dear Johannes,

Could I ask two more questions related to this issue:

1. I saw you answer someone else that "for the endogenous prior you need to specify a full prior", but I do not quite understand this. Just to make sure: should I use endogenous_prior like this?

`estimation(endogenous_prior, datafile=.....);`

2. To avoid the measurement error hitting its upper bound, I increased the value of the 4th parameter, so it no longer hits the bound, but at the expense of the measurement error accounting for too much of the variance decomposition. Is this worse than hitting the upper bound?
Many thanks.
Catherine

1. This refers to the fact that the endogenous_prior option updates the prior you specify in the estimated_params block using the data moments. For that reason you need both a full estimated_params block and `estimation(endogenous_prior, ...)`.
2. There is no general rule for this. If you consider a high variance share of the measurement error a priori unlikely, I would use an informative prior for it: do not use an upper bound, but use e.g. an inverse gamma distribution for the measurement error standard deviation.
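A sketch of what such a setup could look like (shock and file names here are hypothetical; the inv_gamma_pdf prior takes a prior mean and a prior standard deviation, where `inf` is allowed for the latter):

```
estimated_params;
// ... priors for all structural parameters and shock standard deviations ...
// informative inverse gamma prior on the measurement-error standard
// deviation instead of a uniform prior with an upper bound:
stderr y_ME, inv_gamma_pdf, 0.005, inf;
end;

estimation(datafile=mydata, endogenous_prior);
```

The inverse gamma keeps the standard deviation strictly positive while still penalizing implausibly large values, so no hard upper bound is needed.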

Hi Johannes,

I used the endogenous_prior option and set the 4th parameter of the prior for the standard deviation of the output measurement error y_ME to 100% of the data standard deviation (people generally set 25% or 33%), using a uniform_pdf. The estimation results show that the posterior mean of the std of y_ME is about 50% of the corresponding data std. So the variance decomposition after stoch_simul **should** be about 0.5^2 = 25%. Yet it is actually 7% in the command window. Is this weird?

Many thanks,
Catherine

I don’t know exactly what you are doing, but the variance decomposition is presumably the one at horizon infinity. In that case, persistence of the endogenous variables might play a role. Measurement error is typically i.i.d.
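A stylized sketch of why the horizon matters (hypothetical numbers, not the actual model): with obs_t = y_t + me_t, where y is an AR(1) with persistence rho and me is i.i.d., the measurement-error share shrinks as the horizon grows, because the persistent component accumulates variance:

```python
# Hypothetical parameter values for illustration only.
rho, sigma_e, sigma_me = 0.9, 1.0, 1.0

# One-step-ahead forecast error variance: one endogenous innovation
# plus one measurement error have hit, so the shares are equal here.
share_h1 = sigma_me**2 / (sigma_e**2 + sigma_me**2)

# Horizon infinity: the unconditional variance of the AR(1) component
# is sigma_e^2 / (1 - rho^2), which inflates the denominator.
var_y_inf = sigma_e**2 / (1 - rho**2)
share_inf = sigma_me**2 / (var_y_inf + sigma_me**2)
```

With these numbers the share drops from 50% at horizon 1 to roughly 16% at horizon infinity, purely through the persistence of the endogenous component.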

Dear Johannes,

The endogenous_prior option indeed helps the model second moments match the data second moments (both in growth rates). However, when I use one-sided HP-filtered or band-pass-filtered data, the model standard deviations are still larger (about 2-4 times) than the data standard deviations. Do you know why there is such a difference?

Thanks a lot!

Catherine

Which objects are you comparing? The theoretical model moments to filtered data? If so, that comparison is wrong: you would have to look at filtered model variables as well.
More fundamentally, what you are asking of your model is hard. You want it not only to match particular moments, but to match them in a particular frequency band. That is a tall order that the model most probably cannot meet.

Thank you, Johannes, your answer is very important; without your help I would not have known the comparison was wrong.

As far as I know, people generally compare either simulated model moments (using the stoch_simul command with `periods` set to some integer after estimation) or theoretical model moments to filtered data moments.

If I would like to compare theoretical model moments to filtered data moments:

1. If I use one-sided HP-filtered data, should I use `estimation(data=..., filtered_vars);` to compare filtered variable moments with data moments,

or not use the `filtered_vars` option, but use

`stoch_simul(order=1, hp_filter=1600);`

after estimation to get model moments? (But the data are one-sided filtered, while the `hp_filter` option is two-sided?)

2. If I use band-pass-filtered data to estimate, which you do not recommend, should I use the `filtered_vars` option, or not use it, but use

```stoch_simul(order=1, periods=<same number as the data>);```

after estimation, then band-pass filter the simulated data and compare its moments with the real data moments?

Many thanks,
Catherine

You are misrepresenting what I said. To estimate model parameters using Bayesian techniques, you must not use two-sided filters. However, when comparing moments from the estimated model to the data, you are not restricted by such considerations. You just need to be consistent in comparing processed data to the same object from the model. The particular processing choice is up to you. The most common one is to compare growth rates in the data to growth rates from the model, or HP-filtered data to HP-filtered variables from the model.

Sidenote: the `filtered_vars` option does not conduct filtering of the data, but provides you with one-step-ahead forecasts.

If you are willing to not consider the full posterior distribution, you can use

```stoch_simul(order=1, hp_filter=1600);```

after estimation and compare the moments to HP-filtered data.

Many thanks. If I understand you correctly: first I estimate the model with one-sided HP-filtered data, then use

```stoch_simul(order=1, hp_filter=1600);```

after estimation and compare the model moments (presented in the command window) to the moments of the **two-sided HP-filtered data**, instead of the moments of the one-sided HP-filtered data?

I would usually prefer using first differences for estimating the model, but what you describe should work. That said, given that you want to use the one-sided HP filter for estimation, there is no reason not to use the one-sided HP filter for the moment comparison as well.

Dear Johannes,

If I use an inverse gamma distribution for the measurement error, and the estimation results show that this measurement error accounts for 40% of the variance of the observable variable (there actually is severe misspecification in the model), is that too large to be acceptable?

Many thanks,
Huan

Hard to say. You should look at the standard deviations, which usually have a more natural interpretation. But to me this does look too large.