Hello. I am running a basic calibrated log-linearized DSGE model. While the code runs fine and the IRFs look broadly as expected, the theoretical standard deviations and variances of the log-linearized variables are grotesquely large!

For instance, the log-linearized consumption variable c_hat has a variance of 2245, and the log-linearized inflation variable pie_hat has a variance of 302!

Is this something I should be worried about? If so, how should I ideally deal with it? I tried changing the variance of the shocks from 1 to 0.01, but the variances/standard deviations still remain large.

I faced the same problem. To handle it, I did not use the theoretical moments given by Dynare; instead, I calculated them myself using the data available in oo_.SmoothedVariables, and the values I found differ from the ones given by Dynare.

A reason for the large theoretical second-order moments may be that you have at least one eigenvalue close to one (a near unit root). The check command displays the generalized eigenvalues; its output may confirm this possible explanation.
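To see why a near-unit eigenvalue produces huge theoretical moments, consider the scalar case (an illustrative sketch, not from the model in question): for a stationary AR(1) with persistence rho, the unconditional variance is sigma^2 / (1 - rho^2), which explodes as rho approaches 1.

```python
# Theoretical variance of an AR(1): x_t = rho * x_{t-1} + eps_t, eps ~ N(0, sigma^2).
# Var(x) = sigma^2 / (1 - rho^2) blows up as rho approaches 1 (unit root).

def ar1_variance(rho, sigma=1.0):
    """Unconditional variance of a stationary AR(1) process."""
    if abs(rho) >= 1:
        raise ValueError("process is non-stationary: |rho| >= 1")
    return sigma**2 / (1 - rho**2)

# Hypothetical persistence values to show the blow-up near the unit root.
for rho in (0.5, 0.9, 0.99, 0.999):
    print(f"rho = {rho}: Var(x) = {ar1_variance(rho):.1f}")
```

The same mechanism operates in a multivariate DSGE solution: one generalized eigenvalue near one is enough to inflate the variances of every variable loading on that near-unit-root state.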

It is not possible to compute the theoretical moments from the smoothed (or filtered) variables; they are different concepts. Think of a simple AR(1): for any finite sample size, the sample and theoretical moments will differ.
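The AR(1) point above can be checked directly. This is a minimal simulation sketch (the persistence and sample size are hypothetical choices, not taken from the original model): the sample variance of a finite simulated path differs from the population variance even for a long sample.

```python
import random

# Sample variance of a simulated AR(1) path vs. its theoretical variance.
# The two are different objects and will not coincide in a finite sample.

rho, sigma, T = 0.9, 1.0, 5000  # hypothetical persistence, shock sd, sample size
random.seed(0)

x, path = 0.0, []
for _ in range(T):
    x = rho * x + random.gauss(0.0, sigma)
    path.append(x)

mean = sum(path) / T
sample_var = sum((v - mean) ** 2 for v in path) / (T - 1)
theoretical_var = sigma**2 / (1 - rho**2)  # population moment: about 5.26

print(f"sample variance:      {sample_var:.3f}")
print(f"theoretical variance: {theoretical_var:.3f}")
```

The gap shrinks only as T grows; moments computed from smoothed variables inherit the same finite-sample (and filtering) discrepancy.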

Hi, I cannot find anything immediately suspicious in your file. Looking at the variance decomposition, there are three shocks that drive almost all of the variance of the respective variables. You should focus on those. Maybe you have a scaling issue here.

Hi - thanks for the feedback. I identified the errant shocks, but I am unsure what to do about them to tackle the 'scaling issue'.

I tried changing the standard error of all the shocks from the original 1 to 0.01, and the theoretical moments now look sensible. However, as a side effect, the IRFs now have y-axes on the order of 10^-3 or 10^-4 for the key variables. This is especially problematic when I discuss fiscal multipliers in my thesis, where everything looks uncomfortably small compared to when I used a stderr of 1!

So the question is: (a) Is using stderr 0.01 instead of 1 for all the shocks the right way to tackle the issue of large theoretical moments? And (b) should I be overly worried about the resulting impulse responses being on the order of 10^-3?
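On point (b), it helps to remember that a first-order (log-linearized) solution is linear in the shocks: rescaling a shock's stderr by a factor k rescales its IRF by exactly k and its variance contribution by k^2. A toy scalar sketch with hypothetical coefficients:

```python
# In a first-order solution y_t = A*y_{t-1} + B*e_t, scaling the shock stderr
# by k scales every IRF point by k and every variance by k^2.
# Toy scalar illustration (rho and impact are hypothetical numbers):

def irf(stderr, rho=0.9, impact=1.0, horizon=10):
    """IRF of a scalar variable to a one-standard-deviation shock."""
    return [impact * stderr * rho**h for h in range(horizon)]

big = irf(stderr=1.0)
small = irf(stderr=0.01)
ratios = [b / s for b, s in zip(big, small)]
print(ratios)  # every point of the IRF shrinks by the same factor of 100
```

So moving from stderr 1 to 0.01 divides the IRFs by 100 and the theoretical variances by 10,000; the 10^-3-scale y-axes are a mechanical consequence of the smaller shock, not a sign of a new problem.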

The tricky issue is finding out why some shocks generate IRFs that are bigger by a factor of about 100 for a presumably similar shock size. There are two possibilities here:

For some reason, the shock sizes are actually not comparable (e.g. interest rates were already scaled by 100).

For some reason, the propagation mechanism of some shocks is wrong.
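The first possibility is a units mismatch. As a hypothetical illustration (the numbers are not from the model in question): if a rate is expressed in percent, a value of 1.0 already means 100 basis points, so a shock stderr written for a decimal-valued rate is effectively 100 times too large.

```python
# Units mismatch sketch: the same 100-basis-point shock written in two units.
# If the model variable is in decimals, a 1% shock has stderr 0.01;
# if the variable is in percent, the same shock has stderr 1.0.

stderr_decimal = 0.01                    # 100 bp shock, rate measured as a decimal
stderr_percent = stderr_decimal * 100.0  # the same 100 bp shock, rate in percent

mismatch_factor = stderr_percent / stderr_decimal
print(mismatch_factor)  # the factor-of-100 discrepancy between the two conventions
```

Mixing the two conventions across equations (or between the shocks block and the observation equations) produces exactly the kind of factor-of-100 IRF gap described above.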

Hi, thanks a lot for the feedback. I am unfortunately still fuzzy on troubleshooting this. Can you possibly elaborate on how the 'propagation mechanism' for shocks can go wrong?

On the wider point: if I am running a calibrated model, does it really matter if I do not resolve this issue and the theoretical moments continue to be large? I understand that it is not ideal, but my objective is really to study the IRFs, not necessarily to take the model to the data at this point. And all the associated results (like fiscal multipliers, for instance) seem unaffected whether I set stderr to 1 (when the moments explode in size) or 0.01 (when the moments are more under control).
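The observation that the multipliers are unaffected is what linearity predicts: a fiscal multiplier is a ratio of responses (e.g. dY/dG), and in a linear model the shock scale cancels from numerator and denominator. A toy sketch with hypothetical impact coefficients:

```python
# In a linear model, both dY and dG scale with the shock stderr, so their
# ratio (the multiplier) is scale-invariant. Coefficients below are hypothetical.

def impact_multiplier(stderr, dY_per_unit_shock=0.8, dG_per_unit_shock=1.0):
    """Impact fiscal multiplier dY/dG for a government-spending shock of size stderr."""
    dY = dY_per_unit_shock * stderr
    dG = dG_per_unit_shock * stderr
    return dY / dG

print(impact_multiplier(1.0), impact_multiplier(0.01))  # identical up to rounding
```

So for a pure IRF/multiplier exercise, the stderr choice is largely cosmetic; the large theoretical moments still deserve a check, though, since they can signal a near-unit root or a units mismatch that would matter for any quantitative statement about volatilities.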