Variance decomposition for correlated shocks

Hi, Dr. Pfeifer

How can I compute the variance decomposition when two of the shocks are correlated?

Say we have three AR(1) shock processes, Y = Y(-1) + eA, X = X(-1) + eB, and Z = Z(-1) + eC, where eA and eB are correlated innovations and eC is independent.

That is:

eA = u + u1 and eB = gam*u + u2, as in this thread I found: Correlated shocks and impulse response functions

If eA and eB are correlated but eC is independent of both, how can I compute the variance decomposition for eA, eB, and eC in this case?

Because the covariance matrix is not diagonal, the shocks are not orthogonal, so you have to use an orthogonalization scheme. Computing the decomposition under both the upper and the lower Cholesky factorization provides you with a range of possible values. By default, Dynare uses the upper Cholesky decomposition in case of correlated shocks.
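As a rough illustration of what the orthogonalization does, here is a minimal MATLAB sketch (the standard deviations and the correlation are made-up numbers):

[code]sig_A  = 0.01; sig_B = 0.02; rho_AB = 0.4;          % hypothetical values
Sigma  = [sig_A^2             rho_AB*sig_A*sig_B;
          rho_AB*sig_A*sig_B  sig_B^2           ];  % covariance of [eA eB]
R      = chol(Sigma);                               % upper Cholesky factor, Sigma = R'*R
% A row of orthogonal standard normal shocks [uA uB] maps into the correlated
% innovations via [eA eB] = [uA uB]*R. Because R is upper triangular, eA is
% driven by uA alone, while eB loads on both uA and uB: all comovement is
% attributed to the shock ordered first.[/code]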

Dear Johannes,

Could I ask further on this topic?

Suppose u and e are two correlated shocks, and the unconditional variance decomposition (reported after stoch_simul) says that shock u accounts for 60% of the fluctuations in output. How should I interpret this number?

Kind regards,
Huan

In that case, you assume that all comovement between the two shocks is caused by the shock ordered first, i.e. the effect of the first declared shock is partially transmitted via the shock ordered second.

Many thanks.
Could I ask one more question on this?

Suppose A and B are two **correlated** shocks and all comovement is attributed to the first-ordered shock A. I would like to do a counterfactual exercise to see what happens when only one shock (e.g. shock B) hits the model. So I feed the whole estimated series for shock B into the model and shut down shock A. I am wondering whether this is right, since in the counterfactual exercise I do not account for the fact that the estimated series for shock B is partially affected by shock A.

Kind regards,
Huan

If you want to only consider the effect of the shock B ordered last, you need to purge the residual of that equation from the effect of shock A that affects both equations.
Say the observed residuals are given by
[res_1, res_2] = [A, B]*R
where R is the upper Cholesky matrix. Then you can back out the shocks A and B by postmultiplying the left hand side by R^{-1}.
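A minimal MATLAB sketch of that purging step (the covariance matrix and the residual series are hypothetical stand-ins for M_.Sigma_e and the smoothed residuals from your own model):

[code]Sigma = [0.01^2         0.4*0.01*0.02;
         0.4*0.01*0.02  0.02^2       ];   % hypothetical covariance of the residuals
R     = chol(Sigma);                      % upper Cholesky factor
T     = 200;
res   = randn(T,2)*R;                     % stand-in for the T-by-2 smoothed residuals [res_1 res_2]
AB    = res/R;                            % back out the orthogonal shocks: [A B] = [res_1 res_2]*inv(R)
% Counterfactual with only the shock ordered last: keep B, shut down A, and
% map back into residual space before feeding the series into the model.
res_B_only = [zeros(T,1) AB(:,2)]*R;[/code]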


[quote=“jpfeifer”]If you want to only consider the effect of the shock B ordered last, you need to purge the residual of that equation from the effect of shock A that affects both equations.
Say the observed residuals are given by
[res_1, res_2] = [A, B]*R
where R is the upper Cholesky matrix. Then you can back out the shocks A and B by postmultiplying the left hand side by R^{-1}.[/quote]

Many thanks. Could I ask further?

  1. R is the upper Cholesky matrix computed from the variance-covariance matrix. What should I do if the variance-covariance matrix is NOT positive definite?

  2. As for the (historical) shock decomposition, Dynare applies a Cholesky decomposition to the variance-covariance matrix by default, so I can directly use the shock decomposition results generated by Dynare. To see which shock is more important, should I report the shock decomposition or the counterfactual exercise; which one is generally better/preferred?

  3. Changing the order of the correlated shocks in the varexo block changes the theoretical unconditional variance decomposition (after stoch_simul) a lot, so I am wondering which result should be reported. Does that depend on an assumption about which shock actually causes the comovement?

Kind regards,
Huan

First, please note that I adjusted the previous posts to reflect the Dynare ordering that the first declared shock accounts for the covariance, not the last one.

  1. What Dynare does is

[code]i_exo_var = setdiff([1:M_.exo_nbr],find(diag(M_.Sigma_e) == 0 ));
nxs = length(i_exo_var);
chol_S = chol(M_.Sigma_e(i_exo_var,i_exo_var));[/code]
i.e. only the block of the covariance matrix corresponding to shocks with non-zero variance is decomposed.
  2. That depends on your preferences. In principle, those two things are the same.
  3. Yes, exactly. With correlated shocks the ordering matters. If you don’t have a strong prior on what the correct ordering is, the usual way is to invert the ordering and report the resulting range.
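A small sketch of what inverting the ordering amounts to (again with made-up numbers):

[code]Sigma = [0.01^2         0.4*0.01*0.02;
         0.4*0.01*0.02  0.02^2       ];   % hypothetical covariance, shocks ordered [A B]
R_AB  = chol(Sigma);                      % comovement attributed to A
p     = [2 1];                            % reverse the ordering to [B A]
R_BA  = chol(Sigma(p,p));                 % comovement attributed to B
% Computing the variance decomposition under both orderings gives the upper
% and lower bound of the range to report.[/code]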

[quote=“jpfeifer”]If you want to only consider the effect of the shock B ordered last, you need to purge the residual of that equation from the effect of shock A that affects both equations.
Say the observed residuals are given by
[res_1, res_2] = [A, B]*R
where R is the upper Cholesky matrix. Then you can back out the shocks A and B by postmultiplying the left hand side by R^{-1}.[/quote]

Many thanks, Johannes.

Could I ask two more questions?

  1. If I want to only consider the effect of shock A ordered first, should I use
    [res_1, res_2] = [A, B]*R
    where R is now the lower Cholesky matrix?

  2. In the (historical) shock decomposition figure, the area of the first-ordered shock A is caused by shock A alone, while the area of the last-ordered shock B is caused not only by B but also by A (its comovement with A). Right?

Kind regards,
Huan

Sorry, I had to do another correction above.

  1. No, if you were doing this, you would get a wrong covariance matrix and would be trying to invert the ordering.
  2. No, for the first shock, you get the full effect of this shock, even the part that is mediated by the other shocks (due to the causal assumption behind the ordering). For B, the second shock, it would be the effect of B after purging the effect A has on B.
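To see what this implies for the variance shares, here is a stylized static example (the impact coefficients and the covariance matrix are made up):

[code]b     = [1 0.5];                          % hypothetical impact of [eA eB] on some variable y
Sigma = [0.01^2         0.4*0.01*0.02;
         0.4*0.01*0.02  0.02^2       ];   % hypothetical covariance of [eA eB]
R     = chol(Sigma);                      % upper Cholesky factor, [eA eB] = [uA uB]*R
c     = R*b';                             % y = [uA uB]*(R*b'), so c(i) is the loading on u_i
share = c.^2/sum(c.^2);                   % share(1): full effect of A, including comovement
                                          % share(2): effect of B after purging the effect of A[/code]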

[quote=“jpfeifer”]If you want to only consider the effect of the shock B ordered last, you need to purge the residual of that equation from the effect of shock A that affects both equations.
Say the observed residuals are given by
[res_1, res_2] = [A, B]*R
where R is the upper Cholesky matrix. Then you can back out the shocks A and B by postmultiplying the left hand side by R^{-1}.[/quote]

Many thanks again.

  1. If I want to only consider the effect of shock A, ordered first, in a counterfactual exercise, I do **not** need to do a Cholesky decomposition, since A is not affected by the other shocks (even though A is correlated with them). So simply feeding the smoothed series for A into the model will be OK, right?

  2. In a counterfactual exercise, should one feed the smoothed innovation u or the smoothed variable x into the model to see what happens when only one shock, x, hits the economy?

Kind regards,
Huan

[quote=“jpfeifer”]Sorry, I had to do another correction above.

  1. No, if you were doing this, you would get a wrong covariance matrix and would be trying to invert the ordering.
  2. No, for the first shock, you get the full effect of this shock, even the part that is mediated by the other shocks (due to the causal assumption behind the ordering). For B, the second shock, it would be the effect of B after purging the effect A has on B.[/quote]

Many thanks. In addition to several questions in the last post, could I also ask:

I find that even if I change the order of the two correlated shocks, the shock_decomposition figures do NOT change (of course the ordering of the shocks in the figure changes, but each shock's contribution has exactly the same size in each period no matter whether it is ordered first or last). Does that make sense?

As for the **unconditional variance decomposition after stoch_simul**: for the first shock, I get the full effect of this shock, while for B, the second shock, it would be the effect of B after purging the effect A has on B?

Kind regards,
Huan

[quote=“jpfeifer”]If you want to only consider the effect of the shock B ordered last, you need to purge the residual of that equation from the effect of shock A that affects both equations.
Say the observed residuals are given by
[res_1, res_2] = [A, B]*R
where R is the upper Cholesky matrix. Then you can back out the shocks A and B by postmultiplying the left hand side by R^{-1}.[/quote]

Dear Johannes,

I am also interested in this topic. Backing out A and B for the counterfactual exercise using this method results in A and B series that are MUCH MORE (unreasonably) volatile than the observed residuals. Would that be a problem, or is there a scaling issue?

Thanks in advance.

Catherine

@ZBCPA:

  1. The correct treatment depends on what your aim is. You seem to be worrying about a historical counterfactual. There, things are different. See my point 3.

  2. If you consider a fixed parameter set, there is a one-to-one mapping between the smoothed innovations and the smoothed x. If you start with the smoothed initial value for x and feed in the smoothed shocks for u, you will get the historical series for x. When feeding this into the model, you usually feed in the u, but if x is exogenous, you could also use x (although this might be complicated to do if your model is written with x(-1) and u in there). A small sketch follows at the end of this post.

  3. Regarding shock_decomposition: I was too quick and therefore wrong. shock_decomposition displays the counterfactual time series based on the historically encountered shocks. For these, it makes no sense to orthogonalize them, because that is just the way the shocks happened. It is not a statement about the deep causal structure behind the correlation, but about the shocks that actually occurred. Even theoretically uncorrelated shocks can have non-zero correlation in short samples.
    That is why shock_decomposition (which relies on in-sample shocks) is not affected by the ordering of the shocks, in contrast to the variance decomposition, which is a theoretical property relying on asymptotics.

  4. For the variance decomposition: yes, the shocks are orthogonalized. The value for the first shock will include its direct effect on variables plus any indirect effect it has via other shocks it is assumed to move. The second shock in turn will be orthogonalized with respect to the first one. Its effect will therefore only include its direct and indirect effects via other shocks after accounting for the portion that is transmitted via the first shock's effect on the second one. This goes on until the last shock only has a direct effect via the part not already explained by the shocks ordered before it.

@ Catherine:
Without knowing what exactly you are doing, it is hard to tell what is going on. But there is always the issue of whether you decompose the correlation or covariance matrix. Doing a decomposition like this on historical data is also unusual in the context of DSGE models, because the shocks are already identified - in contrast to VARs. Thus, the only reason for orthogonalization is for ease of interpretation for some theoretical properties like a variance decomposition (and to generate correlated random numbers)
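Regarding point 2, a minimal MATLAB sketch of that one-to-one mapping (rho, the smoothed innovations, and the initial value are placeholders for what the smoother would give you):

[code]rho        = 0.95;                   % hypothetical persistence of x
T          = 100;
u_smoothed = 0.01*randn(T,1);        % stand-in for the smoothed innovations u
x0         = 0;                      % stand-in for the smoothed initial value of x
x          = zeros(T,1);             % reconstructed exogenous process
x(1)       = rho*x0 + u_smoothed(1);
for t = 2:T
    x(t) = rho*x(t-1) + u_smoothed(t);   % iterating on x = rho*x(-1) + u reproduces the smoothed x
end[/code]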

[quote=“jpfeifer”]@ZBCPA:

  1. The correct treatment depends on what your aim is. You seem to be worrying about a historical counterfactual. There, things are different. See my point 3.

  2. If you consider a fixed parameter set, there is a one-to-one mapping between the smoothed innovations and the smoothed x. If you start with the smoothed initial value for x and feed in the smoothed shocks for u, you will get the historical series for x. When feeding this into the model, you usually feed in the u, but if x is exogenous, you could also use x (although this might be complicated to do if your model is written with x(-1) and u in there).

  3. Regarding shock_decomposition: I was too quick and therefore wrong. shock_decomposition displays the counterfactual time series based on the historically encountered shocks. For these, it makes no sense to orthogonalize them, because that is just the way the shocks happened. It is not a statement about the deep causal structure behind the correlation, but about the shocks that actually occurred. Even theoretically uncorrelated shocks can have non-zero correlation in short samples.
    That is why shock_decomposition (which relies on in-sample shocks) is not affected by the ordering of the shocks, in contrast to the variance decomposition, which is a theoretical property relying on asymptotics.

  4. For the variance decomposition: yes, the shocks are orthogonalized. The value for the first shock will include its direct effect on variables plus any indirect effect it has via other shocks it is assumed to move. The second shock in turn will be orthogonalized with respect to the first one. Its effect will therefore only include its direct and indirect effects via other shocks after accounting for the portion that is transmitted via the first shock's effect on the second one. This goes on until the last shock only has a direct effect via the part not already explained by the shocks ordered before it.

@ Catherine:
Without knowing what exactly you are doing, it is hard to tell what is going on. But there is always the issue of whether you decompose the correlation or covariance matrix. Doing a decomposition like this on historical data is also unusual in the context of DSGE models, because the shocks are already identified - in contrast to VARs. Thus, the only reason for orthogonalization is for ease of interpretation for some theoretical properties like a variance decomposition (and to generate correlated random numbers)[/quote]

Thank you so much Johannes!

Since shock_decomposition does **not** orthogonalize the shocks while the variance decomposition does, it is possible that shock A accounts for much of the output fluctuations in the shock decomposition while it accounts for very little in the variance decomposition. Would there be any problem in this case, since the two results are not consistent with each other?

Kind regards,
Huan

No, that is not a contradiction. The FEVD just tells you that most of the observed effect of A was most likely caused by fluctuations in another shock B that is correlated with A (because you assigned causality that way).

[quote=“jpfeifer”]If you want to only consider the effect of the shock B ordered last, you need to purge the residual of that equation from the effect of shock A that affects both equations.
Say the observed residuals are given by
[res_1, res_2] = [A, B]*R
where R is the upper Cholesky matrix. Then you can back out the shocks A and B by postmultiplying the left hand side by R^{-1}.[/quote]

Dear Johannes,

Could I ask you further on this? Could you please take a look at the following example?

[code]var Gov, Tfp;
varexo g, z;
parameters rho;
rho = 0.95;

model;
Gov = rho*Gov(-1) + g;
Tfp = rho*Tfp(-1) + z;
end;

shocks;
var g; stderr 0.0039;
var z; stderr 0.0086;
corr g, z = 0.3868;
end;

stoch_simul(order=1);[/code]

The innovations g and z are correlated with a correlation of roughly 0.4 (0.3868).
Then the covariance matrix is A = [0.0039^2, 0.0039*0.0086*0.3868; 0.0039*0.0086*0.3868, 0.0086^2];

R = chol(A)
inv(R) = [256.4103, -107.5509; 0, 126.0937]

[obs_g, obs_z] = [g, z]*R

So I need to postmultiply the observed smoothed innovations by inv(R) to back out [g, z].

However, as you can see, inv(R) has very large entries. Will this generate any problem, since [g, z] will then be on a very different scale from the smoothed series?

Many thanks!

Kind regards,
Huan

I am not sure I am still following our discussion. If you multiply the two correlated shock series by the inverse of the Cholesky factor of the covariance matrix, you will end up with two uncorrelated standard normal shock series. The difference in scale you are talking about is thus required to bring the variances to 1.
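A quick MATLAB check of that point, using the covariance matrix from your example:

[code]Sigma = [0.0039^2              0.0039*0.0086*0.3868;
         0.0039*0.0086*0.3868  0.0086^2            ];
R     = chol(Sigma);              % upper Cholesky factor
T     = 1e5;
obs   = randn(T,2)*R;             % simulated correlated residuals [obs_g obs_z]
gz    = obs/R;                    % backed-out orthogonal shocks [g z]
cov(gz)                           % approximately the identity matrix: unit variances, zero correlation[/code]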