Simulated moments

When the moments are calculated from simulations, they are calculated from a single simulation (of size = periods), correct? Is there a quick way to take the average over multiple simulations to get the moments? Thanks.

Let me ask some related questions, and maybe I can get answers to my previous questions as well.

Should the moments generated from a single simulation approach the true theoretical moments as we increase periods?

If so, why do some papers generate moments from multiple simulations, each of which has a small number of periods (typically equal to the number of periods in the data)?

I guess this is related to the distinction between consistency vs. unbiasedness of an estimator. So the two methods (a single simulation with infinite periods vs. the mean of infinite simulations with finite periods) should give the same answer, right?

Thanks for any help.

Hi otb,

[quote=“otb”]Let me ask some related questions, and maybe I can get answers to my previous questions as well.

Should the moments generated from a single simulation approach the true theoretical moments as we increase periods? [/quote]

Yes, this should be true if the DGP is stationary.

Let us consider a simple example. Suppose that our DGP is the following AR(1) stochastic process:

y_t = c + \rho y_{t-1} + \epsilon_t

where |\rho|<1 and {\epsilon_t} is a mean zero white noise with variance \sigma^2, so that {y_t} is asymptotically second order stationary. One can easily show that the asymptotic first and second order moments (the theoretical moments in Dynare parlance) are:


   E_{\infty}[y_t] = \frac{c}{1-\rho}

   V_{\infty}[y_t] = \frac{\sigma^2}{1-\rho^2} 

Now suppose that there exists an initial condition, say, y_0 = 0. The moments at finite t will then be different from the asymptotic moments because they depend on the initial condition. To see why, note that:


   E[y_1] = c + \rho E[y_0] = c

   V[y_1] = \rho^2 V[y_0] + \sigma^2 = \sigma^2

   E[y_2] = c + \rho E[y_1] = (1 + \rho) \times c

   V[y_2] = \rho^2 V[y_1] + \sigma^2 = (1+\rho^2) \times \sigma^2

and more generally


   E[y_t] =  (1 + \rho + ... + \rho^{t-1}) \times c

   V[y_t] =  (1 + \rho^2 + ... + \rho^{2(t-1)}) \times \sigma^2

or equivalently:


   E[y_t] =  \frac{1-\rho^{t}}{1-\rho} \times c

   V[y_t] =  \frac{1-\rho^{2t}}{1-\rho^2} \times \sigma^2

Obviously when t goes to infinity these moments converge to the asymptotic moments. The speed of convergence depends on the value of the autoregressive parameter: the closer \rho is to one, the slower the convergence to the asymptotic moments.
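The recursions can be checked numerically. A minimal Python sketch (not Dynare code; the parameter values rho, c and sigma2 are illustrative choices, not taken from this thread):

```python
# Check that iterating the moment recursions from y_0 = 0 reproduces the
# geometric-sum formulas, for the AR(1) y_t = c + rho*y_{t-1} + eps_t.
rho, c, sigma2 = 0.9, 1.0, 0.04  # illustrative parameters

def moments_recursive(t):
    """E[y_t] and V[y_t] obtained by iterating the recursions from y_0 = 0."""
    m, v = 0.0, 0.0
    for _ in range(t):
        m = c + rho * m           # E[y_t] = c + rho * E[y_{t-1}]
        v = rho**2 * v + sigma2   # V[y_t] = rho^2 * V[y_{t-1}] + sigma^2
    return m, v

def moments_closed_form(t):
    """Geometric-sum formulas for the same finite-t moments."""
    m = c * (1 - rho**t) / (1 - rho)
    v = sigma2 * (1 - rho**(2 * t)) / (1 - rho**2)
    return m, v

for t in (1, 2, 10, 500):
    mr, vr = moments_recursive(t)
    mc, vc = moments_closed_form(t)
    assert abs(mr - mc) < 1e-9 and abs(vr - vc) < 1e-9

# As t grows, both converge to the asymptotic moments
# c/(1-rho) and sigma2/(1-rho^2):
print(moments_closed_form(1000))  # approximately (10.0, 0.2105...)
```

With rho = 0.9 the convergence is already noticeably slow: after 20 periods the mean is still more than one standard deviation away from c/(1-rho).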

One can check that the formula for the expectation will be affected if we change the initial condition. For instance, if we set y_0 equal to E_{\infty}[y_t] instead of 0, the expectation becomes time invariant. More generally, if the initial condition is a random variable (with non-zero variance), the formula for the second order moment will also be affected.

To sum up, simulated moments will differ from asymptotic moments because of the influence of the initial condition on the moments (and also, obviously, because of sampling issues).

Averaging over different simulated paths starting from different initial conditions is a way to overcome this difficulty. Ideally we would draw the initial conditions from a distribution with expectation E_{\infty}[y_t] and variance V_{\infty}[y_t], but this is not possible since we do not know these moments in the first place.

A single simulation with an infinite number of periods will give results different from the mean of an infinite number of simulations of length t<\infty (because the process is only asymptotically second order stationary), except if the initial conditions are drawn from the ergodic distribution of the stochastic process (which is unknown). A single simulation with an infinite number of periods will converge in probability to E_{\infty}[y_t] and V_{\infty}[y_t], while the mean of an infinite number of simulations of length t will converge in probability to E[y_t] and V[y_t] (these moments depend on the assumption made about the distribution of the initial condition).
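To see this concretely, here is a small Python sketch (again illustrative parameters, not Dynare code): averaging many short simulations that all start from y_0 = 0 recovers the finite-t expectation E[y_T], which sits visibly below the asymptotic expectation E_{\infty}[y_t].

```python
# Mean over many short AR(1) simulations, all started at y_0 = 0,
# estimates E[y_T] for the short sample length, not E_inf.
import random

random.seed(0)
rho, c, sigma = 0.9, 1.0, 0.2   # illustrative parameters
T_short, n_reps = 20, 20_000

total = 0.0
for _ in range(n_reps):
    y = 0.0                      # every replication restarts at the initial condition
    for _ in range(T_short):
        y = c + rho * y + random.gauss(0.0, sigma)
    total += y
mean_short = total / n_reps

E_T = c * (1 - rho**T_short) / (1 - rho)  # finite-t expectation, about 8.78
E_inf = c / (1 - rho)                     # asymptotic expectation, 10

print(mean_short)  # close to E_T, visibly below E_inf
```

Drawing the initial condition of each replication from the ergodic distribution instead (if it were known) would remove this gap.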

Best Regards,
Stéphane.

Thanks for the detailed response Stephane. This was very helpful. Since I want asymptotic moments, I should be fine.

But in any case, does Dynare conduct multiple simulations? I don’t think so, but I just want to make sure and also see if somebody had a quick fix to actually make Dynare do that (perhaps write a loop in the mod file).
Thanks again.

otb

[quote=“otb”]Thanks for the detailed response Stephane. This was very helpful. Since I want asymptotic moments, I should be fine.

But in any case, does Dynare conduct multiple simulations? [/quote]

No. But you can code a matlab loop around the stoch_simul command.

Best, Stéphane.

Hi,
I have a related question. I wrote a loop around stoch_simul(hp_filter = 1600, order = 1) over a continuum of values for the parameter of interest, and I am interested in the theoretical moments at each iteration. They are printed in the command window, BUT I would like to have them stored somewhere for each iteration: the ones contained in oo_ refer only to the last iteration. How could I solve this? Thanks for your help.

This is part of the loop I have written:
…]
steady;
gammas = 0.3:0.05:0.5; // weights of risk
for s = 1:length(gammas);
gamma_y = gammas(s);
stoch_simul(hp_filter = 1600, order = 1, nograph);
if info;
disp(['Computation fails for gamma_y = ' num2str(gamma_y)]);
end;
// std deviation here

Add an else clause where you save the contents of oo_, i.e. something along the lines of

if info; disp(['Computation fails for gamma_y = ' num2str(gamma_y)]); else moments{s,1}=oo_; end;

Thanks for the reply, it works. One last question: if I want to extract from this structure only the standard deviation at each iteration, how would the code look (converting into .mat)? Thanks.

It depends what exactly you need. Variances are stored in oo_.variance, so

if info; disp(['Computation fails for gamma_y = ' num2str(gamma_y)]); else moments(:,s)=diag(oo_.var); end;

I get "Reference to non-existent field 'variance'", but I changed the command to moments{s,:} = oo_.var; which gives me a <5x1 cell>. Each cell is a <1x1 struct> inside which there is the variance of each variable. Ideally, what I need is a way to put these variances for each iteration into a matrix M(n,s).

Hope I have been clear. Thank you again for your useful help.

My mistake. I updated the code above. Note the use of round brackets instead of curly ones.

Thanks a lot, it’s perfect.

[quote=“jpfeifer”]It depends what exactly you need. Variances are stored in oo_.variance, so

if info; disp(['Computation fails for gamma_y = ' num2str(gamma_y)]); else moments(:,s)=diag(oo_.var); end;[/quote]

Hi, what if instead of the variance I want to get the covariance matrix? I tried with the following command:

covars(:,s)=triu(oo_.var);

but I get an error message due to dimension mismatch.

Thank you very much.

triu provides a matrix, but the left-hand side expects a vector. That cannot work. You want something like

I think I have not been clear enough.
Recalling that n is the number of variables and s is the number of iterations, in my first question I wanted to see how the variance changes at each iteration. So the command moments(:,s)=diag(oo_.var) did the right job by providing as output an n x s matrix. What I need now is the covariance, but I want it for each variable with respect to all the others AND for each iteration. The command covars=triu(oo_.var) doesn't do the job because I get an n x 1 vector.

Maybe what I need is the first row of oo_.var, which should be the covariance of the first variable with respect to all the others. Then I could repeat manually the second row, the third, and so on. But I don't know how to get this first row from oo_.var instead of its diagonal, i.e. how to get something like covars(:,s)=first_row(oo_.var).

Would the following command be correct?: covars(:,s)=(oo_.var(1,:))

Thank you for your replies.

[quote=“quiko”]I think I have not been clear enough.
Recalling that n is the number of variables and s is the number of iterations, in my first question I wanted to see how the variance changes at each iteration. So the command moments(:,s)=diag(oo_.var) did the right job by providing as output an n x s matrix. What I need now is the covariance, but I want it for each variable with respect to all the others AND for each iteration. The command covars=triu(oo_.var) doesn't do the job because I get an n x 1 vector.

Maybe what I need is the first row of oo_.var, which should be the covariance of the first variable with respect to all the others. Then I could repeat manually the second row, the third, and so on. But I don't know how to get this first row from oo_.var instead of its diagonal, i.e. how to get something like covars(:,s)=first_row(oo_.var).

Would the following be correct?: covars(:,s)=(oo_.var(1,:));

Thank you for your replies.[/quote]

Then you are looking for something like

temp=triu(oo_.var,1); covars(:,s)=temp(temp~=0);
Note that the vectorization follows MATLAB's column-major linear indexing, i.e. you get the covariance of 1 with 2 first, then 1 with 3, 2 with 3, 1 with 4, 2 with 4, etc.
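The extraction order can be illustrated with a short Python sketch (a pure-Python stand-in for the MATLAB snippet; the matrix entries are made up so that entry ij reads as "cov of variable i with variable j"):

```python
# Mimic MATLAB's temp(temp~=0) on triu(V,1): walk the strictly upper
# triangle column by column, which is what column-major linear indexing does.
n = 4
# V[i][j] encodes the pair (i+1, j+1), e.g. V[0][1] = 12 stands for cov(1,2).
V = [[10 * (i + 1) + (j + 1) for j in range(n)] for i in range(n)]

# Outer loop over columns j, inner loop over rows i, keeping i < j:
covars = [V[i][j] for j in range(n) for i in range(n) if i < j]

print(covars)  # [12, 13, 23, 14, 24, 34] -> (1,2), (1,3), (2,3), (1,4), (2,4), (3,4)
```

One caveat about the MATLAB one-liner itself: temp(temp~=0) silently drops any covariance that happens to be exactly zero, so the vector can come out shorter than n*(n-1)/2 in degenerate cases.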

Hi again,

A further follow-up question about this. Instead of one loop over a parameter, I now have an additional loop over another parameter, i.e. two nested loops in the same simulation. So I have:

gammas = 0.3:0.1:1;
for s = 1:length(gammas);
gamma = gammas(s);

alphas = 0.3:0.1:1;
for z = 1:length(alphas);
alpha = alphas(z);

stoch_simul(hp_filter = 1600, order = 1, nograph);

end;
end;

Previously, I extracted the variance by doing:
moments(:,s)=(diag(oo_.var));

How should I change the above command to account also for the other loop? I tried to do like this:
moments(:,t)=(diag(oo_.var)); where t = s*z. But I would like to put my results into an s-by-z matrix in which each element is the variance evaluated at the specific pair (s,z).

The variances are a vector (1 dimension). Now you want to store this vector along two additional dimensions. The result will be three dimensional. How can you store it in a two dimensional matrix? This would only be possible if the variance is a scalar.
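As a sketch of what the three-dimensional storage could look like, in Python with nested lists standing in for a 3-D array (the variance values are placeholders, not Dynare output):

```python
# Store an n-vector of variances for every (s, z) grid point in a
# 3-D structure indexed as moments[v][s][z]; all values here are fake.
n_vars, n_s, n_z = 3, 5, 8

moments = [[[0.0 for _ in range(n_z)] for _ in range(n_s)] for _ in range(n_vars)]

for s in range(n_s):
    for z in range(n_z):
        # Placeholder for diag(oo_.var) at this (s, z) parameter pair:
        fake_variances = [0.1 * (v + 1) * (s + z + 1) for v in range(n_vars)]
        for v in range(n_vars):
            moments[v][s][z] = fake_variances[v]

# For a surface plot of ONE variable, slice out its 2-D (s, z) matrix:
surface = moments[0]
print(len(surface), len(surface[0]))  # 5 8
```

Slicing out one variable this way gives exactly the two-dimensional (s, z) matrix needed for a 3-D surface plot, which matches the follow-up question below.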

Thanks. What about if I only look at one variable? For example, the command moments(:,t) gives me all variables in the rows. If I look at only one variable, should I be able to get the variance stored in a matrix of dimension (s,z), i.e. each element of the matrix being the variance evaluated at a specific pair (s,z)?
I need this because I would like to make a 3D surface plot (I want to see how the variance changes when both parameters change at the same time).