Variance decomposition with perfect foresight (simul)

Dear all,

I am not even sure it is conceptually possible, but is there any way to compute a variance decomposition for the endogenous variables after a deterministic simulation? It should simply report the fraction of variability explained by the simulated shocks.

Thank you so much!

It depends on the particular setup. But you could compute the variance of the simulated series without any shocks, i.e. just the transition to the steady state, and then the variance of the full simulation with all shocks. The share of variance explained by the shocks is then the difference between the two variances divided by the variance of the full simulation.
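For concreteness, a minimal MATLAB sketch of this calculation, assuming you run the perfect foresight simulation twice from the same model (once with the shocks block removed, once with all shocks) and read the paths from Dynare's standard output matrix oo_.endo_simul; the file names and the saving step are just placeholders:

```
% Run 1: no shocks (only the transition between initial and terminal
% steady state), e.g. with the shocks block commented out:
%   dynare mymodel_noshocks
sim_noshock = oo_.endo_simul;            % endo_nbr x (periods+2)
save('sim_noshock.mat', 'sim_noshock')   % placeholder saving step

% Run 2: full simulation with all shocks:
%   dynare mymodel
sim_full = oo_.endo_simul;
load('sim_noshock.mat')

% Variance over time, computed row by row (one row per variable)
v_noshock = var(sim_noshock, 0, 2);
v_full    = var(sim_full, 0, 2);

% Share of variance explained by the shocks
share = (v_full - v_noshock) ./ v_full;
```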

Thank you so much!

Unfortunately, I am not sure how to simulate the convergence to a steady state. If I simulate the model without shocks, all I get is flat lines at the steady state. Are you suggesting setting different initval and endval blocks?

Moreover, how do I evaluate the variance explained by each individual shock? I guess I cannot simply add the shocks one by one, as there are interactions among them.

Thank you!

This means your initial and terminal steady states are identical, so all of the variance (100%) is explained by the shocks.

Yes, exactly. I normally run the simulation with several shocks.

Is there any way to measure the contribution of each single shock?

Thank you!

Yes. Simulate the model with one shock at a time and compare the resulting variance to the overall variance from the simulation with all shocks. Due to the nonlinearity of the model, the shares will not sum exactly to 100%, but they should be reasonably close.
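A sketch of that loop, assuming the full simulation has already been run so that oo_.exo_simul holds the complete shock paths. Whether perfect_foresight_solver can be re-invoked directly like this may depend on your Dynare version; the safe alternative is to write one .mod file per shock and re-run Dynare each time:

```
% Keep the full-simulation results for reference
exo_full = oo_.exo_simul;                % (periods+2) x exo_nbr shock paths
sim_full = oo_.endo_simul;
v_full   = var(sim_full, 0, 2);

share = nan(M_.endo_nbr, M_.exo_nbr);
sim_j = cell(M_.exo_nbr, 1);
for j = 1:M_.exo_nbr
    oo_.exo_simul = zeros(size(exo_full));
    oo_.exo_simul(:, j) = exo_full(:, j);    % switch on shock j only
    perfect_foresight_solver;                % re-solve with this shock alone
    sim_j{j} = oo_.endo_simul;
    share(:, j) = var(sim_j{j}, 0, 2) ./ v_full;
end
% Due to nonlinearity, the rows of 'share' need not sum exactly to one.
```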

Thank you so much Johannes! Very helpful as always.

Last question: applying the same logic, would it make sense to compute this also period by period? This would produce a sort of historical decomposition for the simulated (future) deviations of the endogenous variables from the steady state.

It would not make sense to compute a variance for a single period. But if you only look at the levels of the variables, then yes, what you get is a historical decomposition.
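To illustrate, a period-by-period version in levels might look as follows, reusing the single-shock paths sim_j from the loop sketched above and taking deviations from the steady state stored in oo_.steady_state (the subtraction relies on MATLAB's implicit expansion, R2016b or later):

```
% Total deviation of each variable from the steady state, per period
dev_full = sim_full - oo_.steady_state;        % endo_nbr x (periods+2)

% Deviation attributable to each shock alone
contrib = zeros(M_.endo_nbr, size(sim_full, 2), M_.exo_nbr);
for j = 1:M_.exo_nbr
    contrib(:, :, j) = sim_j{j} - oo_.steady_state;
end

% For each variable and period, summing the single-shock deviations
% approximates the total deviation up to nonlinear interaction terms:
approx_err = dev_full - sum(contrib, 3);
```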