I have finished the estimation part of my model (I let Dynare do the linearization) and I am currently analysing the results of Dynare’s shock decomposition. I have pulled out the decomposition results for one specific variable and a certain time frame. You can see that below:
Q1) If I want to compute the contribution (deviation) of, let’s say, shock 1 in the year 2007, should I sum over the quarters or take the average over the four quarters?
Q2) In the paper “Sentiment Shocks as Drivers of Business Cycles”, for example, Arias shows a table with the computed percentage contribution of each shock. Given my example above, how would I calculate the percentage contribution of, let’s say, shock 1? The problem is that there are both positive and negative values, plus the initial value.
Sorry for the very basic questions and thanks a lot for your help!
How to do the time aggregation depends on the variables, e.g. whether they are in levels, in logs, or growth rates. Most of the time, the average will be appropriate (as in the average deviation of the level from its mean/trend during a particular year).
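A tiny Python sketch of the two aggregation rules for one year of quarterly contributions. The numbers are made up for illustration, not Dynare output:

```python
# Hypothetical quarterly contributions of shock 1 to a variable
# (deviations from steady state), 2007Q1..2007Q4; numbers are illustrative.
quarterly_contrib = [0.4, -0.1, 0.3, 0.2]

# For a variable in levels or logs, the annual contribution is usually
# the average of the four quarterly deviations.
annual_contrib = sum(quarterly_contrib) / len(quarterly_contrib)

# For a quarterly growth rate, summing the quarters instead gives the
# (approximate) contribution to annual growth.
annual_growth_contrib = sum(quarterly_contrib)
```

Here the average is about 0.2, while the sum is about 0.8; which one is the right annual number depends on how the underlying variable is defined.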
What Arias shows in his Table 2 is a variance decomposition, not a shock_decomposition. Those are different things, and they are triggered differently in Dynare. See the manual.
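To see why the two objects differ: variance-decomposition shares are nonnegative and sum to one by construction, unlike historical shock contributions, which can be positive or negative at any date. A minimal Python sketch for a toy AR(1) process y_t = rho·y_{t-1} + b1·e1_t + b2·e2_t with independent unit-variance shocks (all parameter values are made up, not Dynare output):

```python
# Toy unconditional variance decomposition for
# y_t = rho*y_{t-1} + b1*e1_t + b2*e2_t, e1 and e2 independent, unit variance.
rho, b1, b2 = 0.9, 1.0, 0.5

var_y = (b1**2 + b2**2) / (1 - rho**2)       # total variance of y
share_e1 = (b1**2 / (1 - rho**2)) / var_y    # fraction of Var(y) due to e1
share_e2 = (b2**2 / (1 - rho**2)) / var_y    # fraction of Var(y) due to e2

# The shares are nonnegative and sum to 1 by construction; there is no
# sign or initial-value issue as in a historical shock decomposition.
```

With these numbers, e1 accounts for 80% of the variance of y and e2 for 20%.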
Sorry, one last question regarding the shock_decomposition command. I have estimated my model on a dataset that does not include GDP. Is it still possible to perform a shock decomposition for this variable? For example, let’s say I include output in my datafile, estimate the model on all other observables (excluding GDP), and later declare output under the shock_decomposition command. Would that work?
Just for clarification: if I perform a shock decomposition after the stoch_simul command, I will get the historical decomposition at the calibrated parameter values (e.g. the posterior mean). However, if I perform it right after the estimation command, I will get the shock decomposition at the posterior mean. Is that correct (as in the case of the conditional variance decomposition)?
After stoch_simul, it will be at the calibrated parameter values used for stoch_simul. After estimation, it will be at the posterior mean parameter vector. There is no Bayesian treatment of this command that would give you the average of the statistic of interest over the parameter draws (because then the average shocks over the draws would not add up to the observables).
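The adding-up property mentioned here is easy to check by hand: at a given parameter draw, the shock contributions plus the initial-value term must sum to the smoothed variable in every period. A Python sketch with a hypothetical decomposition matrix (in Dynare the actual results are stored in oo_.shock_decomposition; the numbers below are invented):

```python
# Hypothetical decomposition output, one row per period.
# Columns: shock 1, shock 2, initial value, smoothed variable.
decomp = [
    [0.3, -0.1, 0.05, 0.25],
    [0.2,  0.1, 0.02, 0.32],
]

# In each period, the contributions (including the initial value)
# must add up to the smoothed variable.
for row in decomp:
    *contributions, smoothed = row
    assert abs(sum(contributions) - smoothed) < 1e-12
```

If this identity failed after averaging decompositions over parameter draws, the "decomposition" would no longer reproduce the observed data, which is exactly the point made above.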
Sorry, does this mean that if I calibrate my model at the posterior mean (and then run the shock_decomposition command), I would obtain the same results as running the shock decomposition after the estimation command? That is, both decompositions deliver the same output. Or is that wrong?
Another embarrassing question… Can we say that the shock decomposition Dynare produces should correspond to, or be consistent with, the estimated IRFs?
So, for example, if a positive productivity shock hits the economy, it triggers a positive response of GDP to this innovation. Let’s say I now perform a shock decomposition of output. Would we expect that, during a time of increasing output (i.e. a positive deviation from its steady state), the TFP shock has contributed positively to this upward movement?
At each point in time, shocks hit the economy and trigger IRFs. The shock decomposition tells you the cumulated effect of, e.g., past TFP shocks on a variable at each point in time.
Regarding your question: if TFP shocks are the only shocks that move GDP, you can draw that conclusion. Otherwise, what you say is not necessarily true, because the movement could have come from another shock as well.
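The two points above can be illustrated in a few lines of Python. In a linear model, each variable equals the sum, over shocks, of past innovations weighted by the IRF coefficients; and with more than one shock, output can be above steady state even while the cumulated TFP contribution is negative. A toy AR(1) sketch, y_t = rho·y_{t-1} + e_tfp_t + e_dem_t, with invented innovation sequences (names and numbers are hypothetical, not Dynare output):

```python
rho = 0.5
e_tfp = [1.0, -2.0, 0.0]   # hypothetical TFP innovations
e_dem = [0.0,  3.0, 0.0]   # hypothetical demand innovations
T = len(e_tfp)

# The IRF of y to a unit shock at horizon h is rho**h, so each shock's
# historical contribution is its past innovations cumulated with the IRF.
contrib_tfp = [sum(rho ** (t - s) * e_tfp[s] for s in range(t + 1)) for t in range(T)]
contrib_dem = [sum(rho ** (t - s) * e_dem[s] for s in range(t + 1)) for t in range(T)]

# The contributions add up to the variable itself, period by period.
y = [a + b for a, b in zip(contrib_tfp, contrib_dem)]

# In period 1, y is above steady state (y[1] = 1.5) even though the
# cumulated TFP contribution is negative (contrib_tfp[1] = -1.5):
# the demand shock dominates.
```

This is exactly the caveat above: a positive deviation of output does not by itself tell you that the TFP contribution was positive.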