Hi all,
For the AR(1) process of shocks in a DSGE model, should we also put the constant term into the AR(1) equation, for example:
lnA = (1 - rhoA) + rhoA* lnA(-1) + stdeA*eA (1)

Or is it fine to just use: lnA = rhoA*lnA(-1) + stdeA*eA (2)

The paper I am trying to replicate uses the 1st way (with the constant term), but I see almost everyone use the 2nd one (without the constant term). So I wonder whether I could just drop the constant term, since it is more popular and more convenient?

The two models are different. In the first case the unconditional expectation of \log A_t is equal to one (which is weird, because it also means that the deterministic steady state of A is Euler's number e), while in the second equation the unconditional expectation is 0 (i.e. the deterministic steady state of A is one). If you drop the constant, you will change the steady state of the model.

Hi Stephane,
Thanks for your reply. I just checked. So the original model is in level form, not log form,
which is:
A = (1 - rhoA) + rhoA* A(-1) + stdeA*eA (1)
So I will have steady state A = 1, the same as in the 2nd equation above.

Variable A will then have the same deterministic steady state in both equations, but the unconditional expectation and variance of A will differ. Also, by construction A is necessarily positive in the second equation but not in the first. This is not a problem if you do a first-order approximation of the model, but it could be an issue if you simulate the model with the extended path approach.
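A quick way to see the positivity point is to simulate the two specifications with the same shock draws. A minimal sketch in Python; the parameter values (rho, sigma) are hypothetical, chosen only for illustration:

```python
import math
import random

# Hypothetical parameter values chosen for illustration only.
rho, sigma, T = 0.95, 0.3, 10_000
random.seed(0)

a_level, ln_a = 1.0, 0.0            # both start at the common steady state A = 1
min_level = float("inf")
min_log_form = float("inf")
for _ in range(T):
    e = random.gauss(0.0, sigma)    # the same draw feeds both specifications
    a_level = (1 - rho) + rho * a_level + e    # level AR(1)
    ln_a = rho * ln_a + e                      # log AR(1)
    min_level = min(min_level, a_level)
    min_log_form = min(min_log_form, math.exp(ln_a))

# exp(ln A) is positive by construction; the level process has no such
# guarantee and, with these parameter values, dips below zero along the path.
```

The unconditional moments of A also differ across the two specifications, which is the other half of the point above.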

Let me stress the last point by @stepan-a. If your model is solved at first order, the linear level version A_t = (1 - \rho_A) + \rho_A A_{t-1} + \sigma_A \varepsilon_{A,t}
and the log-normal version \log A_t = \rho_A \log A_{t-1} + \sigma_A \varepsilon_{A,t}
are equivalent. At higher order or with nonlinear solution approaches, there will be differences due to Jensen's inequality.
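The Jensen's-inequality point can be checked directly: under the log specification the stationary distribution of \log A_t is N(0, \sigma_A^2/(1-\rho_A^2)), so E[A_t] = \exp(0.5\,\sigma_A^2/(1-\rho_A^2)) > 1, while the level specification has E[A_t] = 1 exactly. A minimal Monte Carlo sketch (parameter values are hypothetical):

```python
import math
import random

rho, sigma = 0.9, 0.1               # hypothetical values for illustration
random.seed(1)

# Mean of A implied by the log-normal stationary distribution (Jensen):
analytic_mean = math.exp(0.5 * sigma**2 / (1 - rho**2))

# Monte Carlo check along one long simulated path of the log specification.
ln_a, total, T = 0.0, 0.0, 200_000
for _ in range(T):
    ln_a = rho * ln_a + random.gauss(0.0, sigma)
    total += math.exp(ln_a)
mc_mean = total / T                 # close to analytic_mean, and above 1
```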

Hi, parsimony per se would suggest using white noise instead of AR(1) processes. But we know that this would deteriorate the fit of the model: we need additional persistence in the exogenous variables to match the persistence in the data. So it is a matter of parsimony versus fit. Note that it is not guaranteed that AR(1) processes will do the job. You could estimate a collection of models with different ARMA(p,q) specifications for the exogenous variables, and select the model (hence the specification of the statistical model for the exogenous variables) that maximizes the marginal density of the sample. This Bayesian measure, used for model comparison, selects the best model in terms of fit while favouring parsimonious specifications.
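The marginal data density requires a full Bayesian estimation, but the parsimony-versus-fit tradeoff it embodies can be illustrated with a simpler criterion. The sketch below is my own illustration, not from the post: it simulates a persistent series and shows that BIC, which also penalizes extra parameters, still prefers AR(1) over white noise because the fit gain dominates the penalty.

```python
import math
import random

random.seed(2)
rho_true, sigma_true, n = 0.8, 1.0, 2000
x = [0.0]
for _ in range(n):
    x.append(rho_true * x[-1] + random.gauss(0.0, sigma_true))
x = x[1:]                           # drop the initial condition

def gaussian_loglik(resid, s2):
    """Conditional Gaussian log-likelihood of residuals with variance s2."""
    m = len(resid)
    return -0.5 * m * math.log(2 * math.pi * s2) - sum(r * r for r in resid) / (2 * s2)

# White-noise model: x_t = e_t, one parameter (the variance).
s2_wn = sum(v * v for v in x) / len(x)
bic_wn = 1 * math.log(len(x)) - 2 * gaussian_loglik(x, s2_wn)

# AR(1) model: x_t = rho x_{t-1} + e_t, rho estimated by OLS, two parameters.
num = sum(a * b for a, b in zip(x[1:], x[:-1]))
den = sum(v * v for v in x[:-1])
rho_hat = num / den
resid = [a - rho_hat * b for a, b in zip(x[1:], x[:-1])]
s2_ar = sum(r * r for r in resid) / len(resid)
bic_ar1 = 2 * math.log(len(resid)) - 2 * gaussian_loglik(resid, s2_ar)

# Lower BIC wins: the AR(1) is preferred despite the extra parameter.
```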

@stepan-a’s answer refers to how you should ideally proceed from a model fit perspective. My answer referred to the path dependence in the literature, i.e. how it has been done and continues to be done. The early RBC papers fitted a stochastic process to linearly detrended TFP. A white noise process does not capture the dynamics in this series. Instead they used an AR(1). The resulting model performed quite well according to some metrics, so people have resorted to using this type of process by referring to older papers.

I have a question regarding writing an exogenous AR(1) process in a nonlinear model that will be solved at 2nd order.

For example, in my model I have a variable 'tau' that denotes the cost of approaching a bank. Tau follows an AR(1) process and the steady-state value of this variable (tau_ss) is 0.02.

I want to compute the welfare impact of a 1% shock to this variable, using a 2nd-order approximation in Dynare.

I am confused about how to write the process correctly in the nonlinear model and what value I should put for the shock size.

Which one (if any) among these alternatives is correct:

Option 1: tau = tau_ss^(1-rho) * tau(-1)^rho * eps_tau

Option 2: tau = (1-rho)*tau_ss + rho*tau(-1) + eps_tau

Option 3: log(tau) = (1-rho)*log(tau_ss) + rho*log(tau(-1)) + eps_tau

with, in the shocks block: var eps_tau; stderr 0.0002;
(note: 0.0002 is obtained as 1% x the steady-state value of tau)

Option 1 is wrong, because multiplicative processes require e^{\varepsilon_t}, i.e. exp(eps_tau) rather than eps_tau. With that change, option 3 is equivalent to option 1. Options 2 and 3 differ in whether \varepsilon_\tau measures the absolute or the percentage deviation of \tau from its steady state.
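The claimed equivalence is easy to verify numerically: with the exp correction, \tau_t = \tau_{ss}^{1-\rho}\,\tau_{t-1}^{\rho}\,e^{\varepsilon_t} is just the log form \log\tau_t = (1-\rho)\log\tau_{ss} + \rho\log\tau_{t-1} + \varepsilon_t written in levels. A quick check with illustrative numbers (tau_ss = 0.02 is from the question; rho, the lagged value, and the shock draw are hypothetical):

```python
import math

# tau_ss is from the question; rho, tau_prev, and eps are hypothetical.
tau_ss, rho, tau_prev, eps = 0.02, 0.9, 0.021, 0.01

# Option 1, corrected to use exp(eps) instead of eps:
tau_mult = tau_ss ** (1 - rho) * tau_prev ** rho * math.exp(eps)

# Option-3 form: AR(1) in logs around log(tau_ss), then exponentiated.
tau_log = math.exp((1 - rho) * math.log(tau_ss) + rho * math.log(tau_prev) + eps)

# The two coincide up to floating-point rounding.
```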

Thank you very much for your help.
I want to confirm my understanding: so I can use option 3 with a 2nd-order approximation, and interpret the shock size as a 1 percent deviation of tau from its steady state? Am I right?
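A quick numeric check of that interpretation (tau_ss = 0.02 is from the question; rho is hypothetical): a shock of 0.01 in the log specification moves tau by roughly 1 percent, which corresponds to an absolute movement of about 0.0002 = 1% x 0.02 in levels.

```python
import math

tau_ss, rho = 0.02, 0.9     # tau_ss from the question; rho is hypothetical
eps = 0.01                  # one shock in the log specification

# Start at steady state and apply a single shock in the log (option-3) form.
tau_new = math.exp((1 - rho) * math.log(tau_ss) + rho * math.log(tau_ss) + eps)

pct_dev = (tau_new - tau_ss) / tau_ss   # about 0.01, i.e. a 1% deviation
abs_dev = tau_new - tau_ss              # about 0.0002 in levels
```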

May I ask: when doing the simulation, should I use the level or the log form of the AR(1) process for the productivity shock? Is there any requirement for this?
Thanks so much in advance!

Thanks so much for your prompt reply. Sorry for the mistake above: what I meant is not simulation but optimization.

I am studying the influence of the volatility of innovations in that AR(1) process on various optimal firm policies. Since I need to take the exponential of the shock after generating its possible values with the Tauchen (1986) method, I think I do need to care about second-order effects: after taking the exponential, the shock acts like a convex function with limited downside loss but larger upside potential, which could affect firms' policies.

So what I am trying to say is: if I care about second-order effects, and I personally prefer the level form because of its linearity, are there any factors that might stop me from doing so?

Purely in conceptual terms, productivity cannot become negative. So the level specification has this particular downside. If you worry more about the skewness, then you can use the level specification. That type of tradeoff will always be present.
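The convexity concern raised above can be made concrete with a textbook Tauchen (1986) discretization (my own sketch, not code from the thread): on the discretized grid, E[e^z] exceeds e^{E[z]}, i.e. after exponentiation the upside of the shock outweighs the downside.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tauchen(rho, sigma, n=9, m=3.0):
    """Tauchen (1986) discretization of z_t = rho z_{t-1} + e_t, e ~ N(0, sigma^2).

    Returns an evenly spaced grid spanning m unconditional standard
    deviations and the n x n transition matrix."""
    sd_z = sigma / math.sqrt(1.0 - rho ** 2)
    grid = [-m * sd_z + i * (2 * m * sd_z) / (n - 1) for i in range(n)]
    step = grid[1] - grid[0]
    P = []
    for zi in grid:
        row = []
        for j, zj in enumerate(grid):
            lo = (zj - rho * zi - step / 2) / sigma
            hi = (zj - rho * zi + step / 2) / sigma
            if j == 0:
                row.append(norm_cdf(hi))
            elif j == n - 1:
                row.append(1.0 - norm_cdf(lo))
            else:
                row.append(norm_cdf(hi) - norm_cdf(lo))
        P.append(row)
    return grid, P

grid, P = tauchen(rho=0.9, sigma=0.2)   # hypothetical parameter values

# Stationary distribution by iterating the transition matrix from uniform.
pi = [1.0 / len(grid)] * len(grid)
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(len(grid))) for j in range(len(grid))]

mean_z = sum(p * z for p, z in zip(pi, grid))          # about 0 by symmetry
mean_exp_z = sum(p * math.exp(z) for p, z in zip(pi, grid))
# Jensen's inequality on the grid: E[exp(z)] > exp(E[z]).
```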