AR(1) process of shocks


Hi all,
For the AR(1) process of shocks in DSGE model, should we put also the constant term into the AR(1) equation, for example:
lnA = (1 - rhoA) + rhoA* lnA(-1) + stdeA*eA (1)

Or it is fine to just use: lnA = rhoA* lnA(-1) + stdeA*eA (2)

The paper I am trying to replicate uses the 1st way (with the constant term), but I see almost everyone use the 2nd one (without the constant term). So I wonder: could I just drop the constant term and proceed, since the 2nd form is more popular and more convenient?



The two models are different. In the first case the unconditional expectation of \log A_t is equal to one (which is weird, because it also means that the deterministic steady state of A is the Euler number e), while in the second equation the unconditional expectation is 0 (i.e. the deterministic steady state of A is one). If you drop the constant, you will change the steady state of the model.
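To see the difference concretely, here is a minimal simulation sketch (illustrative rho and sigma values, not taken from any particular paper): the constant (1 - rho) pushes the unconditional mean of \log A_t to one, while the version without a constant has mean zero.

```python
import numpy as np

# Illustrative parameters (not from the thread): rho = 0.9, sigma = 0.01.
rng = np.random.default_rng(0)
rho, sigma, T = 0.9, 0.01, 200_000

lnA1 = np.zeros(T)  # eq (1): lnA = (1 - rho) + rho*lnA(-1) + sigma*e
lnA2 = np.zeros(T)  # eq (2): lnA = rho*lnA(-1) + sigma*e
eps = rng.standard_normal(T)
for t in range(1, T):
    lnA1[t] = (1 - rho) + rho * lnA1[t - 1] + sigma * eps[t]
    lnA2[t] = rho * lnA2[t - 1] + sigma * eps[t]

# Discard a burn-in so the initial condition does not bias the sample mean.
mean1, mean2 = lnA1[T // 10:].mean(), lnA2[T // 10:].mean()
print(mean1, mean2)  # mean1 is close to 1, mean2 is close to 0
```

The analytical counterpart is the usual AR(1) mean formula: with constant c, E[\log A_t] = c / (1 - \rho_A), so c = 1 - \rho_A gives a mean of one.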



Hi Stephane,
Thanks for your reply. I just checked. So the original model is in level form, not log form,
which is:
A = (1 - rhoA) + rhoA* A(-1) + stdeA*eA (1)
So I will have steady-state A = 1, the same as in the 2nd equation above.

So, in this case, are the two expressions equivalent?

Thank you,


Variable A will then have the same deterministic steady state in both equations, but the unconditional expectation and variance of A will still differ. Also, by construction A is necessarily positive in the second equation but not in the first. This is not a problem if you do a first-order approximation of the model, but it could be an issue if you simulate the model with the extended path approach.
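A quick simulation sketch of that positivity point (with a deliberately large, illustrative sigma so the effect is visible): both processes have deterministic steady state A = 1, but the level specification can go negative, while exp of the log specification cannot.

```python
import numpy as np

# Illustrative parameters; sigma is deliberately large to expose the issue.
rng = np.random.default_rng(1)
rho, sigma, T = 0.9, 0.5, 100_000

A_level = np.ones(T)   # level spec: A = (1 - rho) + rho*A(-1) + sigma*e
x = np.zeros(T)        # x = log A in the log spec: x = rho*x(-1) + sigma*e
eps = rng.standard_normal(T)
for t in range(1, T):
    A_level[t] = (1 - rho) + rho * A_level[t - 1] + sigma * eps[t]
    x[t] = rho * x[t - 1] + sigma * eps[t]
A_log = np.exp(x)

print(A_level.min(), A_log.min())  # level version dips below zero; log version stays positive
```

With small shocks the two stay close to each other around A = 1, which is why the distinction only bites for nonlinear solution methods such as the extended path.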



Let me stress the last point by @stepan-a. If your model is solved at first order, the linear level version
A_t = (1 - \rho_A) + \rho_A A_{t-1} + \sigma_A \varepsilon_A
and the log-normal version
\log A_t = \rho_A \log A_{t-1} + \sigma_A \varepsilon_A
are equivalent. At higher order, or for nonlinear solution approaches, there will be differences due to Jensen's inequality.
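The first-order equivalence can be checked in one line. Writing a_t \equiv \log A_t and expanding A_t = e^{a_t} \approx 1 + a_t around the steady state a = 0, the level equation becomes:

```latex
A_t = (1-\rho_A) + \rho_A A_{t-1} + \sigma_A \varepsilon_{A,t}
\;\Longrightarrow\;
1 + a_t \approx (1-\rho_A) + \rho_A \left(1 + a_{t-1}\right) + \sigma_A \varepsilon_{A,t}
\;\Longrightarrow\;
a_t \approx \rho_A a_{t-1} + \sigma_A \varepsilon_{A,t}
```

which is exactly the log specification, so the two coincide up to first order.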


Why do technology shocks in DSGE models follow an AR(1) process?


Because an AR(1) is the most parsimonious covariance-stationary stochastic process that is able to capture the dynamics in observed TFP series.


Hi,

Parsimony per se would suggest using white noise instead of AR(1) processes. But we know that this would deteriorate the fit of the model: we need additional persistence in the exogenous variables to match the persistence in the data. So it is a matter of parsimony and fit.

Note that it is not guaranteed that AR(1) processes will do the job. You could estimate a collection of models with different ARMA(p,q) specifications for the exogenous variables, and select the model (hence the specification of the statistical model for the exogenous variables) that maximizes the marginal density of the sample. This Bayesian measure, used for model comparisons, selects the best model in terms of fit while favouring parsimonious specifications.
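As a rough stand-in for that comparison idea, here is a sketch using a plain Gaussian BIC (a penalized-fit criterion, not Dynare's marginal data density) on a simulated persistent series. All parameter values are illustrative; the point is only that a white-noise model loses badly to an AR(1) when the data are persistent.

```python
import numpy as np

# Simulate a persistent series (illustrative rho = 0.9).
rng = np.random.default_rng(2)
rho_true, sigma, T = 0.9, 1.0, 2_000
y = np.zeros(T)
eps = rng.standard_normal(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + sigma * eps[t]

def gaussian_bic(resid, n_params):
    """BIC under i.i.d. Gaussian residuals, up to an additive constant."""
    n = resid.size
    return n * np.log(resid.var()) + n_params * np.log(n)

# White-noise model: y_t = e_t (one parameter: the innovation variance).
bic_wn = gaussian_bic(y[1:], n_params=1)

# AR(1) model: y_t = rho * y_{t-1} + e_t, with rho estimated by OLS.
rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
bic_ar1 = gaussian_bic(y[1:] - rho_hat * y[:-1], n_params=2)

print(rho_hat, bic_ar1 < bic_wn)  # AR(1) wins despite its extra parameter
```

The same logic extends to comparing ARMA(p,q) specifications against each other, with the marginal density playing the role of the penalized criterion in a fully Bayesian estimation.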



Thank you for your consideration.


@stepan-a’s answer refers to how you should ideally proceed from a model-fit perspective. My answer referred to the path dependence in the literature, i.e. how it has been done and continues to be done. The early RBC papers fitted a stochastic process to linearly detrended TFP. A white noise process does not capture the dynamics in this series; instead, they used an AR(1). The resulting model performed quite well according to some metrics, so people have kept using this type of process by referring to older papers.