# Estimation AR(2)-process/Joint prior distribution

Hi everyone,
I am looking for a way to estimate the two persistence parameters of an AR(2) process, the only prior restriction being stability of the process. One of my model equations specifies an AR(2) process for the variable x:

```
x = (1 - rho1 - rho2)*xbar + rho1*x(-1) + rho2*x(-2) + epsilon;
```

where epsilon is an exogenous i.i.d. shock, x is the endogenous variable, and xbar is a prespecified fixed parameter equal to the unconditional mean of x. rho1 and rho2 are the two parameters to be estimated in this equation. The only prior restriction I would like to impose on the two parameters is stability of the AR(2) process, i.e. that abs(rho1+rho2)<1. Is there any way to specify something like a joint prior distribution of two parameters, e.g. "the linear combination rho1+rho2 is uniformly distributed", while still being able to recover the individual parameter values? The problem is the "less than" sign, which means that I cannot simply express one parameter through the other. I would greatly appreciate any suggestions.

Johannes

Dear Johannes,

Your stability condition for an AR(2) is incomplete. The conditions are as follows:

```
rho2 < 1 + rho1
rho2 < 1 - rho1
rho2 > -1
```

See Sargent ("Macroeconomic Theory", Academic Press, 1987) for a proof. The stochastic process is stationary iff (rho1, rho2) lies in the triangular region defined by these inequalities. So if we have no prior beliefs about these parameters, we may choose a (joint) uniform prior over this region. This is not possible (without hacks in the MATLAB code) with Dynare, because we do not have an interface for joint prior distributions.
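These inequalities are easy to check programmatically. A minimal Python sketch (the function name is mine), which also illustrates that the condition abs(rho1+rho2)<1 alone is not sufficient:

```python
def is_stationary_ar2(rho1, rho2):
    """AR(2) stationarity triangle:
    rho2 < 1 + rho1, rho2 < 1 - rho1, rho2 > -1."""
    return rho2 < 1 + rho1 and rho2 < 1 - rho1 and rho2 > -1

print(is_stationary_ar2(1.5, -0.5225))  # True: inside the triangle
print(is_stationary_ar2(0.8, 0.3))      # False: rho1 + rho2 >= 1
print(is_stationary_ar2(1.0, -1.5))     # False: rho2 <= -1, even though abs(rho1 + rho2) < 1
```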

I think a more sensible approach would be to estimate the roots of the lag polynomial instead of its coefficients. In this case the bound conditions are obvious. Note also that uniform priors over these roots imply a non-uniform joint prior for the autoregressive parameters over the triangular region. If you estimate the roots lambda1 and lambda2, you can compute the autoregressive parameters in the steady state file.
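To see why real roots in (-1, 1) always yield a stationary AR(2), note that the triangle conditions factor as (1 + lambda1)(1 + lambda2) > 0, (1 - lambda1)(1 - lambda2) > 0 and lambda1*lambda2 < 1. A quick Monte Carlo sketch in Python (not Dynare code) confirms that every such pair of roots maps into the triangle:

```python
import random

random.seed(0)

def roots_to_ar2(l1, l2):
    # (1 - l1*L)(1 - l2*L) = 1 - (l1 + l2)*L + l1*l2*L^2,
    # so rho1 = l1 + l2 and rho2 = -l1*l2
    return l1 + l2, -l1 * l2

inside = 0
for _ in range(10000):
    l1 = random.uniform(-1, 1)
    l2 = random.uniform(-1, 1)
    rho1, rho2 = roots_to_ar2(l1, l2)
    if rho2 < 1 + rho1 and rho2 < 1 - rho1 and rho2 > -1:
        inside += 1

print(inside)  # 10000: every draw lands in the stationarity triangle
```

A histogram of the sampled (rho1, rho2) pairs would also show the non-uniform shape of the implied joint prior over the triangle.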

Best,
Stéphane.

Dear Stéphane,
Thanks a lot for your quick answer. Your suggestion of writing the AR process in terms of its roots is a good way to solve the problem. Moreover, it generalizes to AR processes of arbitrary order. For convenience, I post sample code for an AR(2) process with rho1 = 1.5 and rho2 = -0.5225, i.e. real roots 0.95 and 0.55 (computations follow Hamilton 1994, Time Series Analysis, chapters 2-3).

```
var x;
varexo epsilon;

parameters root1 root2 xbar;

xbar=0.25;
//rho1=1.5;
//rho2=-0.5225;
root1=0.95; // root1 = rho1/2 + sqrt((rho1/2)^2 + rho2)
root2=0.55; // root2 = rho1/2 - sqrt((rho1/2)^2 + rho2)

model;
// parameter conversion
# rho1= (root1+root2);
# rho2= - root1*root2;
// model equation
x=(1-rho1-rho2)*xbar+rho1*x(-1)+rho2*x(-2)+epsilon;
end;

shocks;
var epsilon; stderr 0.1;
end;

estimated_params;
root1, 0.95, -0.9999, 0.9999, uniform_pdf, 0, 1.9998/sqrt(12); // std of U(-0.9999,0.9999) is (b-a)/sqrt(12)
root2, 0.55, -0.9999, 0.9999, uniform_pdf, 0, 1.9998/sqrt(12);
end;
```

The only remaining issue is that this approach excludes complex roots, because the uniform prior has support only on the real line.

Johannes

Yes, here we exclude complex roots in the AR(p) shocks (and also the case of repeated real roots). I am not convinced that this is a big issue for DSGE models, because I prefer to obtain endogenous complex eigenvalues in the reduced-form transition matrix of the model rather than force the existence of complex eigenvalues by putting them in the exogenous part of the model. We could estimate the real and imaginary parts of the roots, but then the dimension of the vector of estimated parameters would not be constant. The current version of Dynare cannot handle this case (we would need a reversible-jump MCMC).

When I put AR(p) processes in the exogenous part of the model, I never estimate the autoregressive parameters directly. In my opinion this is not a sound idea, because the interpretation of these parameters depends on the value of p: for instance, rho1 is a persistence parameter in an AR(1) model but not in an AR(2) model. If I do not want to exclude complex eigenvalues, I estimate the autocorrelation function of, say, the productivity shock instead of its autoregressive parameters (the first-order autocorrelation always has the same interpretation), and I compute the autoregressive parameters from the autocorrelation function. (Also, I do not estimate the variance of the innovation of the shock but the variance of the shock itself.) But here we face the same problem as with direct estimation of the autoregressive parameters: we have to exclude the cases where the candidate autocorrelation function is not a valid (positive definite) one; this can be done in the *_steadystate file. Consequently the prior probability mass does not sum to 1 in general, so if we want to compare models we have to correct the marginal densities for that.
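For the AR(2) case, the mapping between the first two autocorrelations and the autoregressive coefficients follows from the Yule-Walker equations (r1 = rho1/(1 - rho2), r2 = rho1*r1 + rho2). A sketch in Python of the round trip that such a *_steadystate file would perform (function names are mine):

```python
def ar2_to_acf(rho1, rho2):
    # Yule-Walker: r1 = rho1 + rho2*r1  =>  r1 = rho1/(1 - rho2)
    r1 = rho1 / (1 - rho2)
    r2 = rho1 * r1 + rho2
    return r1, r2

def acf_to_ar2(r1, r2):
    # invert the Yule-Walker equations
    denom = 1 - r1 ** 2
    return r1 * (1 - r2) / denom, (r2 - r1 ** 2) / denom

r1, r2 = ar2_to_acf(1.5, -0.5225)
rho1, rho2 = acf_to_ar2(r1, r2)
print(round(rho1, 6), round(rho2, 6))  # 1.5 -0.5225
```

Validity of the candidate autocorrelation function (e.g. |r1| < 1 and positive definiteness of the implied autocovariance matrix) still has to be enforced, as discussed above.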

I will post an example when I find some time…

Best,
Stéphane.

Re-posting from the "Local linear trend estimation" thread so that all the material on this issue is in one place.

I am trying to solve an Unobserved Components model in Dynare, where the trend is a random walk with a stochastic drift and the cycle is an AR(2) process. When estimating the AR parameters I run into stability issues, which means the estimates end up quite far from the values I used to simulate the data.

I tried to resolve this using the nice approach laid out in this post. It seems to work if I estimate only the AR coefficients. However, when I attempt to estimate the variances as well, the estimates of the AR coefficients (computed indirectly from the roots) are no longer close to the values used in the simulation.

Code is copied below. Any insight would be much appreciated.

Thanks!

```
var C Y T Dd;
varexo eps_1 eps_2 eps_3;

parameters root1 root2;

//rho1=1.5;
//rho2=-0.5225;
root1=0.95; // root1 = rho1/2 + sqrt((rho1/2)^2 + rho2)
root2=0.55; // root2 = rho1/2 - sqrt((rho1/2)^2 + rho2)

model;
// parameter conversion
# rho1= (root1+root2);
# rho2= - root1*root2;

// model equation
Y = T + C;
C = rho1*C(-1)+rho2*C(-2)+ eps_1;
(T-T(-1))-(T(-1)-T(-2))= Dd(-1) + eps_2-eps_2(-1);
Dd = eps_3;
end;

initval;
T = 1;
Y = 1;
Dd = 0;
C = 0;
end;

shocks;
var eps_1; stderr 0.1;
var eps_2; stderr 0.1;
var eps_3; stderr 0.1;
end;

stoch_simul(periods=2501, order=1);
save d_obs Y;

//3.2 ML Estimation

estimated_params;
stderr eps_1, 0.01, 0, 1;
stderr eps_2, 0.01, 0, 1;
//stderr eps_3, 0.01, 0, 1;
root1, 0.95, -0.9999, 0.9999;
root2, 0.55, -0.9999, 0.9999;
end;

varobs Y;

estimation(datafile=d_obs, presample=4, first_obs=1, mode_compute=4, mode_check, diffuse_filter); // simulated data (MLE)
```

In that case you are facing an identification issue.

That is, the parameters are only jointly identified.

OK, thanks for this. Do you have any thoughts on how I could work towards a resolution? The model setup is, I believe, a fairly standard Unobserved Components model, so I don't see why the identification issue would arise.

I would also be interested in where the error message you printed is displayed. I do not get any error messages when I run the code, just incorrect parameter estimates.

All best,
D

I used the identification command, but there is a bug that needs to be fixed before you can see the output I got; that is why you currently cannot replicate it. See github.com/DynareTeam/dynare/issues/1105

There is no resolution here. There are models in which several parameter combinations are observationally equivalent. Please try going back to your original model, without the growth-rate transformation. That might help.