Basic RBC model and business cycle moments

Hi,

Since I get too low a volatility of investment relative to output in the extended RBC model I am working on (investment comes out less volatile than output), I decided to go back to the most basic RBC model to understand how it succeeds in replicating the high relative volatility of investment.

So I relied on this presentation of the basic RBC model by Sims (www3.nd.edu/~esims1/stylized_facts.pdf), where he displays model-generated business cycle moments that replicate the high volatility of investment relative to output.

However, when I tried to replicate the model, I could not reproduce the same moments at all (even though I use the same parameter calibration). In particular, I find that investment volatility is lower than output volatility. I tried HP-filtered moments as well, simulated moments rather than theoretical ones, and a second-order versus a first-order approximation, but I still cannot obtain the moments that Sims displays in the last table.

The model could not be simpler, so I don't understand what I am missing here. I attach my code below.

Similarly, when I run the basic RBC model code proposed on the Dynare examples page, I see that it also misses the relative volatility of investment. But as far as I know, the high volatility of investment relative to output is not considered a puzzling feature of the data in the literature and is replicated by most models, so where does it usually come from?

Many thanks for any help!
BasicRBC.mod (650 Bytes)
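
For reference, here is a minimal sketch of the kind of model I have in mind (this is not the attached file; the functional forms, variable names, and parameter values are illustrative and not necessarily the same as Sims' calibration):

// Minimal sketch of a basic RBC model in levels (illustrative calibration)
var y c k i n a;    // output, consumption, capital, investment, hours, log TFP
varexo e;           // TFP innovation
parameters alpha beta delta rho psi;

alpha = 0.33; beta = 0.99; delta = 0.025; rho = 0.95; psi = 1.75;

model;
// Euler equation for capital
1/c = beta/c(+1)*(alpha*exp(a(+1))*k^(alpha-1)*n(+1)^(1-alpha) + 1 - delta);
// intratemporal condition with log utility in consumption and leisure
psi*c/(1-n) = (1-alpha)*y/n;
y = exp(a)*k(-1)^alpha*n^(1-alpha);   // production
k = i + (1-delta)*k(-1);              // capital accumulation
y = c + i;                            // resource constraint
a = rho*a(-1) + e;                    // TFP process
end;

steady_state_model;
a   = 0;
k_y = alpha/(1/beta - 1 + delta);       // capital-output ratio from the Euler equation
c_y = 1 - delta*k_y;                    // consumption share of output
n   = (1-alpha)/(psi*c_y + 1 - alpha);  // hours from the intratemporal condition
k   = k_y^(1/(1-alpha))*n;
y   = k/k_y;
i   = delta*k;
c   = c_y*y;
end;

shocks;
var e; stderr 0.01;
end;

stoch_simul(order=1) y c i n;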

I think I figured out the answer to my question:

The standard deviations reported for each variable X are actually relative standard deviations (i.e., relative to the mean of the variable):
sqrt(variance(X))/mean(X).

I guess this is obvious to everyone else, but it wasn't to me, as it is never specified anywhere.

The mistake you made is that you took his model, which is in levels, while the reported standard deviations are for the logarithms of the levels. At first order, this is equivalent to what you compute with

sqrt(variance(X))/mean(X)

because this is the Jacobian transformation.
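
Spelled out, the first-order argument is:

\log X \approx \log \bar{X} + \frac{X - \bar{X}}{\bar{X}}
\quad\Rightarrow\quad
Var(\log X) \approx \frac{Var(X)}{\bar{X}^2}
\quad\Rightarrow\quad
std(\log X) \approx \frac{\sqrt{Var(X)}}{\bar{X}},

where \bar{X} is the steady state (mean) of X. The approximation error of this Taylor expansion is what makes the equivalence hold only up to first order and for small fluctuations around \bar{X}.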

Ok, thank you!

If I understand correctly, we never compare the mean of the model-generated series to the mean of the data series? Is that because the series are demeaned anyway when applying the HP filter, or because the orders of magnitude of the two kinds of series are not comparable (even if we take per capita variables in the data)?

Thanks again

Usually, it is both. Due to constant returns to scale, the means are often meaningless, as you can multiply by an arbitrary TFP level to obtain any level of the variables. And HP-filtering is going to remove the mean in any case.

Ok, thank you!

However, I still don't really understand the role of the HP filter in explaining the results I get from my model.

I compare two models with respect to their ability to explain business cycles, as usual. One model displays much more volatility in the logged variables than the other model before HP-filtering (two-sided, with the standard Matlab hpfilter command). But once I HP-filter the model-generated series, the second model displays more volatility than the first one.

I know that I am supposed to HP-filter my model-generated series since I also HP-filter the data series, but I don't really see how this makes sense for understanding the propagation mechanism of the model. I am wondering why HP-filtering model-generated series (which have no trend) can alter the ability of one model to replicate the volatility in the data relative to the other. The first model displays oscillating impulse response functions to standard shocks, whereas the other one displays standard impulse response functions; could the difference between the HP-filtered and non-HP-filtered volatility results come from this?
As far as I understand, when there is no trend in my series, the HP filter will only modify the frequency composition of the fluctuations.
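
For concreteness, this is roughly how I do the comparison (a sketch where y1 and y2 stand for the logged simulated series of the two models; here I build the two-sided HP filter directly with sparse matrices rather than calling hpfilter, so the snippet is self-contained):

% Standard deviations of two simulated (logged) series before and after
% two-sided HP filtering with lambda = 1600 (y1, y2 assumed to be T x 1 vectors)
T      = numel(y1);
D      = diff(speye(T), 2);                          % (T-2) x T second-difference operator
lambda = 1600;
hp_cyc = @(y) y - (speye(T) + lambda*(D'*D)) \ y;    % cyclical component = series - HP trend
fprintf('std before HP: %.4f (model 1) vs %.4f (model 2)\n', std(y1), std(y2));
fprintf('std after  HP: %.4f (model 1) vs %.4f (model 2)\n', std(hp_cyc(y1)), std(hp_cyc(y2)));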

When I use one-sided HP filtering, the differences between the two models before and after filtering are a bit less pronounced.

To sum up, my question is: why does HP-filtering the model-generated series make one model display more volatility than the other, when that same model displayed less volatility than the other before HP-filtering? The IRFs are obviously not HP-filtered and they help to understand what is going on in both models, so what sense does it make to compare IRFs across the two models when the IRFs of model 1 display more volatility than those of model 2, but model 2 ends up displaying more volatility than model 1 after HP-filtering? I hope the question is understandable.

Many thanks!

Filtering excludes some frequency bands from the filtered series (most clearly visible in the case of a bandpass filter). If most of the volatility comes from frequency bands that are filtered out, a reversal of volatilities is expected.
Oscillations are almost always a sign of a problem with the model (and oscillations are not the same thing as frequency). HP-filtering will not affect the IRFs. That is why they are the first statistic you should look at.
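
To see which frequency bands the HP filter removes, you can plot the gain of its cyclical component (a quick Matlab sketch based on the well-known frequency-domain formula, with lambda = 1600):

% Gain of the HP filter's cyclical component at frequency omega
lambda = 1600;
omega  = linspace(0.005, pi, 1000);                               % frequencies in radians
gain   = 4*lambda*(1-cos(omega)).^2 ./ (1 + 4*lambda*(1-cos(omega)).^2);
plot(2*pi./omega, gain);                                          % x-axis: cycle length in quarters
xlim([0 80]); xlabel('cycle length (quarters)'); ylabel('gain');
% the gain is close to 0 for long (low-frequency) cycles and close to 1 for short ones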

Thank you very much for your helpful answer.

Should I conclude that most of my volatility before HP-filtering is "long-run" volatility? Because if I understand correctly, filtering keeps only the "short-run" fluctuations (which correspond to the high frequencies?).

I thought it might be related to the oscillations in my IRFs (which make sense in the specific case of my model), because the cycles in these oscillations (I mean the alternation of values above and below the steady state) have quite a long duration, so I was wondering whether these persistent cycles are somehow filtered out by the HP filter.

Also, I have seen business cycle papers that use quadratic detrending rather than HP-filtering: does this make sense? Is there a systematic way to know which filtering method is best?

Thanks again!

There is no general rule for which trend to use. It is a matter of preference (and of your view of how the world works and of what constitutes a business cycle). For the reversal you describe to take place, it must indeed be that one series has most of its power in the low-frequency band that is taken out by the HP filter.

Sorry, but I have to come back to this old post. I thought it would be quite straightforward to calculate some standard deviations from a model and compare them to those reported in a paper, but now I’m somewhat confused.

The standard deviation of a variable X should be sqrt(variance(X)), and this is also what Dynare reports as std. dev. in the theoretical moments.

There is also the concept of the relative standard deviation, which to my knowledge is the ratio of the standard deviation to the mean. It is often reported in percentage terms, i.e., 100*sqrt(variance(X))/mean(X). This is also suggested here: Generate new statistics and tables.
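
In Dynare, I would compute this from the theoretical moments as follows (a sketch; I am assuming that the ordering of oo_.mean and oo_.var matches the variable list passed to stoch_simul):

% relative standard deviations in percent from Dynare's theoretical moments
rel_sd = 100*sqrt(diag(oo_.var))./oo_.mean;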

But in another part of the forum (How to get standard deviation in percentage - #3 by lm280299), where someone suggests obtaining the standard deviation in percent by calculating
100*sqrt(variance(X))/mean(X), jpfeifer responds:

Regarding the transformation you suggest: up to first order and for small volatilities you are correct. You are basically performing a Taylor approximation of log(x) about the steady state xbar inside of the standard deviation

Why is this only correct up to first order and for small volatilities?

Furthermore, I also could not reconcile the standard deviations I got from BasicRBC.mod with those reported in the table on p. 4 of "Stylized Facts" by Sims (see above). I see that when I calculate
sqrt(variance(X))/mean(X), or alternatively when I write the model in logs and then just take sqrt(variance(X)), my results get closer to those reported by Sims, but they are still way off. Did anyone succeed in replicating the table from Sims?
And jpfeifer, how did you know that the values in the table are for the logarithms of the levels? I couldn't find this information in the text.

Thanks!

Which numbers exactly do you want to replicate? The first column in Sims' table is the standard deviation of the logged and HP-filtered variables; the relative standard deviations are the same numbers relative to that of output, i.e. \sigma_x/\sigma_y.

Thanks for your response. Yes, I was trying to replicate the first two columns of the table on p. 4. So I guess I should get the variances of the model variables in logs and activate the hp_filter option in the stoch_simul command (as stated here: HP Filter simulated variables, I don't have to simulate the model). How can I get an idea of which value to use for hp_filter?

It’s always hp_filter=1600 in quarterly models.
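
So, for example (with placeholder variable names, and assuming the model variables are already expressed in logs):

stoch_simul(order=1, hp_filter=1600) y c i n;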

Thanks! Perfect, now my numbers are the same as in the table!