# Difference between different types of moments

Dear Jpfeifer,

I am confused about the difference between theoretical moments and empirical moments. I use Dynare to solve a log-linearized model. I know that if I set the `periods` option to zero, theoretical moments will be computed. My understanding is that theoretical moments are the moments generated from the posterior distribution, while empirical moments are simulated moments, generated at the posterior mean with the simulation starting at the steady state. Am I right?
BTW, I also need to compare the moments generated from the model with those of the actual data. Which type of moments should I use?

No. Generally, theoretical moments are the ones computed from the state space representation of the solved model. See e.g. Hamilton (1994): Time Series Analysis. These moments are computed at the given parameter vector, i.e. either the calibrated one when running stoch_simul directly, or the estimated one when running stoch_simul after estimation.

Simulated moments are based on simulation from the model solution for a specified period of time. If you run sufficiently many periods, the simulated moments will converge to the theoretical moments (asymptotically).
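As a quick illustration of this convergence (a Python sketch with made-up parameter values, not Dynare output): for an AR(1) process y_t = rho*y(t-1) + e_t with e ~ N(0, sigma^2), the theoretical variance is sigma^2/(1-rho^2), and the sample variance of a long simulation approaches it as the number of periods grows:

```python
import numpy as np

rho, sigma = 0.9, 0.01                     # hypothetical persistence and shock std. dev.
theoretical_var = sigma**2 / (1 - rho**2)  # variance implied by the model solution

rng = np.random.default_rng(0)
T = 200_000                                # many periods, so the sample moment converges
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + sigma * rng.standard_normal()

simulated_var = y.var()
print(theoretical_var, simulated_var)      # the two numbers should be close for large T
```

With a short simulation of, say, 200 periods, the simulated variance can deviate noticeably from the theoretical one, which is exactly the small-sample issue behind the question of which moments to compare.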

There is no general rule about which comparison is better: several short simulations of the same length as your actual data, or the theoretical moments. I would personally go for the latter.


Hi,

A related question. When solving a log-linearized model, I run a loop over some parameter and then I look at the variance generated for those different parameters.

I extract the second moments by doing:
var = diag(oo_.var);

Then I plot var as a function of the parameters over which I have looped. So here my (silly) question is: what is the unit of measurement of this volatility? Should I worry if I observe big numbers like 21,400 on the y-axis?

It depends on the definition of your variables.

If they are in levels, the variance is measured in the units of the levels squared. For example, if you think of real GDP as apples, then the variance will have units of apples squared, and the standard deviation will have units of apples. You can see that the variance will be massively affected if you change the underlying units. If you measure GDP in quarter-apples instead of apples, the variance will increase by a factor of 16, since Var(4x) = 16·Var(x). Hence, depending on the unit of measurement, the number can be really big.
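To see the rescaling point numerically (a minimal Python sketch, with an arbitrary simulated series standing in for GDP):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)       # e.g. GDP measured in apples
x_quarters = 4 * x                 # the same GDP measured in quarter-apples

# Var(4x) = 16 * Var(x): the variance depends entirely on the unit of measurement
print(x_quarters.var() / x.var())  # 16.0 (up to floating-point error)
```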

If you logged the variables or they are in percentage deviations, they are unit-free. The units will be percent squared for the variance and percent for the standard deviation. In that case, a variance in the four figures like the one you report is problematic.

The model is log-linearized around the steady state, so if I got it right, my variance is in percentage deviations from the steady state. I am not actually interested in the magnitude of the variance, so in principle having big numbers does not bother me; I am interested in the dynamics.

But I wonder whether by “problematic” you mean that it is a mistake I should fix, or not.

How did you scale your shock terms? Is 1 a standard deviation of 1 percent or 100 percent? If it is the latter, there must be something wrong.

I have put:

shocks;
var e; stderr 1;
end;

I don’t know if that’s what you mean. Should it be 0.01?

That’s what I meant. Compared to using
shocks;
var e; stderr 0.01;
end;

every variance will be scaled up by 100^2 = 10,000, which might explain your big numbers.

Thank you, I see it now.

I have another small question, but it’s not related. Please let me know if I should post it in another section.

The question is:

If I want to introduce two shocks (a technology shock and a monetary shock), assuming both shocks have the same functional form:
a = rho*a(-1) + e;

Is it correct to add the same “a” (I assume a log-linearized model) to both the production function and the Taylor rule? Or should I use two different notations for the two shocks, even if they have an identical functional form?

You need to add two separate shock processes. Otherwise, the two shocks will be identical, i.e. monetary and technology shocks would be perfectly correlated.
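The point above can be illustrated with a small Python sketch (hypothetical parameter values): if the technology process and the monetary process are driven by the same innovation, they are perfectly correlated; with separate innovations they are essentially uncorrelated:

```python
import numpy as np

rho, T = 0.9, 10_000
rng = np.random.default_rng(2)
e = rng.standard_normal(T)        # single innovation, mistakenly shared
e_m = rng.standard_normal(T)      # separate monetary innovation

a = np.zeros(T)                   # technology shock process
m_same = np.zeros(T)              # "monetary" process driven by the SAME innovation
m_sep = np.zeros(T)               # monetary process with its own innovation
for t in range(1, T):
    a[t] = rho * a[t - 1] + e[t]
    m_same[t] = rho * m_same[t - 1] + e[t]
    m_sep[t] = rho * m_sep[t - 1] + e_m[t]

print(np.corrcoef(a, m_same)[0, 1])  # exactly 1: the "two" shocks are one shock
print(np.corrcoef(a, m_sep)[0, 1])   # close to 0: genuinely independent shocks
```

In Dynare terms, this means declaring two exogenous innovations and two AR(1) equations, even when both processes share the same persistence parameter.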

Dear professor Pfeifer,

I wish you could help me fix the Dynare code to perform a replication of the paper by Nakamura and Steinsson, “Fiscal Stimulus in a Monetary Union: Evidence from US Regions”, AER, 2014. In particular, I am having trouble with Section F of their online appendix (aeaweb.org/aer/app/10403/20111109_app.pdf), where they introduce capital into their New Keynesian model. Here capital is assumed to be owned by households, who rent it to firms, so there is a capital rental rate, investment, capital, part of which is rented out, etc. My problem is related to the Jacobian at the steady state… could you please help me adjust the file?

Thanks a lot and best regards,

Davide
Regional_Capital_Market.mod (4.92 KB)

Focus on:

```Warning: Some of the parameters have no value (A_c, a_c, a_pi, zeta, sigma_c, zeta_c, zeta_g) when using steady. If these parameters are not initialized in a steadystate file, Dynare may not be able to solve the model.. ```

I am sorry, but these parameters have all been set, except for sigma_c, which I have now fixed. But it still does not work!

Couldn’t there be a deeper problem due to the fact that I did not select the right identifying equations?

Best,

Davide
Doctoral student
at ECARES – ULB
Regional_Capital_Market.mod (4.99 KB)

Your parameter initializations are not in a valid recursive order. Try selecting the initializations before the model block and executing them with F9 (in the MATLAB editor): you will see that zeta uses psi_nu in its definition, but psi_nu is only defined later on.

What exactly do you mean by executing the first block with F9?

See [Problem with .mod file]

Hey all, I absolutely need your help fixing this code.

There is a problem with finding the steady state, and sometimes the Blanchard-Kahn conditions are not verified.

Davide
Regional.mod (6.45 KB)

Put

```resid(1); ```
before the steady command. Your model is supposed to be linear, so with initial values of 0 all static residuals should be zero. But resid(1) returns:

[quote]Residuals of the static equations:

Equation number 1 : 0
Equation number 2 : 0
Equation number 3 : 0
Equation number 4 : 0
Equation number 5 : 0
Equation number 6 : 0
Equation number 7 : 0
Equation number 8 : -0.03475
Equation number 9 : -0.03475
Equation number 10 : 0
Equation number 11 : 0
Equation number 12 : 0
Equation number 13 : 0
Equation number 14 : 0
Equation number 15 : 0
Equation number 16 : 0
Equation number 17 : 0
Equation number 18 : 0
Equation number 19 : 0
Equation number 20 : 0
Equation number 21 : 0
Equation number 22 : 0
Equation number 23 : 0
Equation number 24 : -1
Equation number 25 : -1
Equation number 26 : -0.99
Equation number 27 : -0.99
Equation number 28 : -0.5
Equation number 29 : -0.5
Equation number 30 : 1
Equation number 31 : 1
Equation number 32 : 0
Equation number 33 : 0
Equation number 34 : -0.009537
Equation number 35 : -0.009537
Equation number 36 : -1
Equation number 37 : -0.5
Equation number 38 : 0
Equation number 39 : 0
Equation number 40 : 0
Equation number 41 : 0
Equation number 42 : 0
Equation number 43 : 0
Equation number 44 : 0
Equation number 45 : 0[/quote]

The non-zero equations are the ones where the linearization must be wrong or the equation was entered incorrectly.