I have annual data of GDP deviations from a trend, expressed in percent of trend GDP. I would like to use this data to calibrate shocks in a quarterly model (unfortunately I can’t re-calibrate the model to be in annual frequency as well). In particular, I would need to calibrate the shock’s standard deviation and its AR(1) coefficient.
Regarding the standard deviation, my understanding is that I just need to calibrate the shock such that the resulting (quarterly) standard deviation of GDP in the model (expressed in percent of steady state output) equals the (annual) standard deviation in the data. I suspect this works because, for this type of variable, the correct way to aggregate quarterly values into annual ones is simply to take an average
(this is how I would understand p. 4232 of Born and Pfeifer, https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.104.12.4231).
Does this approach sound reasonable to you? And, furthermore, does somebody have an idea how I could use my data to calibrate the AR(1) coefficient of my shock? Intuitively, a given AR(1) parameter implies higher persistence in annual data than in quarterly data, so I am not sure how to go about this.
Many many thanks for any help!
You need to be careful. The averaging in our paper refers to the percentage deviations from trend, i.e. the mean. But you are interested in the standard deviation, which depends on the covariances. Given that you have an AR process, these are not 0, and the approach you outlined will fail.
You could work out the formulas analytically, but my experience is that it is often quicker to simply simulate the process.
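As an illustration of why the covariance terms matter, here is a minimal simulation sketch (the quarterly persistence `rho` and innovation standard deviation `sigma` are purely illustrative values, not a calibration):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma = 0.9, 0.01   # illustrative quarterly AR(1) persistence and innovation std
T = 400_000              # long sample so simulated moments are precise

# Simulate the quarterly AR(1) process x_t = rho * x_{t-1} + sigma * u_t
u = rng.standard_normal(T)
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = rho * x[t - 1] + sigma * u[t]

# "Annualize" the percentage deviations by averaging each block of 4 quarters
annual = x.reshape(-1, 4).mean(axis=1)

# Because consecutive quarters are positively correlated, averaging dampens the
# standard deviation only partially: the annual std is close to, but below, the
# quarterly one
print(x.std(), annual.std())
```

So simply equating the quarterly model standard deviation to the annual data standard deviation would be off by the gap between those two numbers.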
What are your targets for the annual standard deviation and the autocorrelation in annual data?
Thanks for your reply Johannes, this is very very helpful!
I think I did not make myself clear in my original post. I have a quarterly model but annual data, and I want to use some moments of this annual data to calibrate my model.
First, I want to calibrate the standard deviation of one of my shocks in the model such that the resulting output gap volatility matches its counterpart in the data (I have annual output gap data). The output gap in my model is the deviation of GDP from its flexible-price level, expressed as a percent of the flexible-price level. So what I did was simply to “annualize” my model-generated quarterly data by taking averages over every 4 consecutive quarters. Then I compare the standard deviation of the annual data with the standard deviation of the annualized simulated data. I thought this was in line with the approach you outlined in your paper – or am I mistaken?
Second, I wanted to calibrate the persistence of some shock in my model such that the resulting autocorrelation in output matches the one I observe in the (annual) data. Here I followed the same approach: I annualized the model-generated data by averaging over every 4 consecutive quarters, and then I compare the first-order autocorrelation of output with the one I have in the (annual) data. Does this make sense to you?
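In case it clarifies things, the matching exercise I have in mind could be sketched as follows (the parameter values, and using a plain AR(1) as a stand-in for model-generated output, are assumptions for illustration only):

```python
import numpy as np

def annualize(q):
    """Average every 4 consecutive quarters of percentage deviations."""
    q = np.asarray(q)
    return q[: q.size // 4 * 4].reshape(-1, 4).mean(axis=1)

def ac1(z):
    """First-order autocorrelation of a series."""
    return np.corrcoef(z[:-1], z[1:])[0, 1]

# Stand-in for simulated quarterly model output: an AR(1) with illustrative values
rng = np.random.default_rng(1)
rho, sigma, T = 0.8, 0.01, 400_000
u = rng.standard_normal(T)
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = rho * x[t - 1] + sigma * u[t]

annual = annualize(x)
# Compare these simulated annual moments with the data targets, then adjust the
# shock parameters until they match
print(annual.std(), ac1(annual))
```

The first-order autocorrelation of the annualized series typically sits well below the quarterly persistence (here between rho^4 and rho), so the annual target pins down the quarterly persistence only through this mapping.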
Kind regards and thanks again!
I see. I thought you were talking about targeting the parameters of a purely exogenous process.
The question you now have is how to go from a quarterly percentage deviation to an annual one. You are right that we showed in our comment that in this case taking the average is correct. Think about it this way: say steady state or potential output is 100 apples per quarter, i.e. 400 apples per year. If you are only getting 99 apples in the first quarter, you have a one percent output gap. But for the year, you are only missing 1 apple out of 400, which is 1/4 percent.