Sorry for the stupid question. When we solve our model in Dynare, is the linearisation via a first-order Taylor approximation the same as log-linearising the model by hand?

My question is based on Johannes’ reply in this post, where Johannes says:

But the model you are referring to, the Smets/Wouters 2007 model, was linearized, i.e. a first order Taylor approximation was performed.

As far as I remember, the SW model is log-linearised.

No, it’s not the same. By default, Dynare linearizes the model; if you need a log-linearization, you have to use the loglinear option. In this case Dynare linearizes the transformed model in which every variable x is replaced by e^x. A log-linearization is a linearization of a transformed model. Other transformations (such as Box-Cox) could be considered and may reduce the accuracy errors (see the paper by Kenneth Judd and Fernandez-Villaverde and Rubio-Ramirez).
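To see this concretely, here is a minimal sketch in plain Python (not Dynare) using the toy function y = k^alpha with illustrative parameter values: linearizing after the substitution k = e^u reproduces the hand log-linearization, which for this Cobb-Douglas example happens to be exact, while the level linearization carries an approximation error.

```python
import math

# Toy example: y = k^alpha is exactly linear in logs but not in levels.
# Parameter values are illustrative; Dynare's loglinear option performs
# the analogous x -> exp(x) substitution internally.
alpha, k_ss = 0.33, 4.0
f = lambda k: k ** alpha
y_ss = f(k_ss)

# First-order Taylor approximation in LEVELS around k_ss
def y_linear(k):
    return y_ss + alpha * k_ss ** (alpha - 1) * (k - k_ss)

# First-order Taylor approximation in LOGS: linearize g(u) = log f(e^u)
# around u_ss = log k_ss, then map the approximation back to levels
def y_loglinear(k):
    u_ss = math.log(k_ss)
    g = lambda u: math.log(f(math.exp(u)))
    h = 1e-6  # numerical derivative of g at u_ss (equals alpha here)
    slope = (g(u_ss + h) - g(u_ss - h)) / (2 * h)
    return math.exp(g(u_ss) + slope * (math.log(k) - u_ss))

k = 1.10 * k_ss  # a 10% deviation from steady state
print(f(k), y_linear(k), y_loglinear(k))
```

For this particular function the log-linear approximation recovers the exact value, which is why the choice of transformation can matter for accuracy.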

Thanks Stéphane for getting back to me and clearing that up. What you said is in line with Johannes’ guide to specify obs. equations. I think I just got confused by the comment and was mixing things up.

@stepan-a’s answer relates to the distinction between linearization (absolute deviations) and log-linearization (percentage deviations). Dynare by default only does linearization. To get a log-linearized version, you can either use the loglinear option, substitute every variable x by exp(x) in the model equations, or append auxiliary equations defining the logs of the variables you are interested in.

However, for my point in Solving a DSGE Model this does not matter. In both cases the model would be linear (either in logs or levels) and second order derivatives would be 0.

The advantage of the explicit use of the exp() function, or of the third approach mentioned by @jpfeifer, compared to the loglinear option is that you are not forced to log-linearize the whole model, which is not possible if the steady state of some variables is not strictly positive. The third approach is not related to a potential reduction of the accuracy errors; it only deals with the measurement of the variables (if the observed variables are in logs, you need this transformation to be consistent with the data).
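A minimal sketch of why the whole-model log transform can fail, using hypothetical steady-state values (a variable like net exports with a negative steady state is the usual suspect):

```python
import math

# Hypothetical steady states: net exports can be negative in steady
# state, so the whole model cannot be put in logs, but output can.
steady_state = {"y": 1.8, "nx": -0.05}

def log_deviation(x, x_ss):
    # Only defined when both x and x_ss are strictly positive
    return math.log(x) - math.log(x_ss)

print(log_deviation(1.85, steady_state["y"]))  # fine: y_ss > 0
try:
    log_deviation(-0.04, steady_state["nx"])   # nx_ss < 0: undefined
except ValueError as e:
    print("cannot log-linearize nx:", e)
```

Logging only the strictly positive variables, via exp() or auxiliary equations, sidesteps this problem.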

Thanks Stéphane and Johannes for your detailed answers. Just one last question on this topic; let’s say I specify the following endogenous variable in the nonlinear model

GDP = 100* ( (GDP_level - GDP_SS)/GDP_SS ),

where GDP_SS is the steady state of GDP,

for which I later want to see the IRF. Shouldn’t the percentage deviation from this approach be the same as under log-linearisation? Let me know if this wasn’t clear.
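The two measures agree only to first order. A quick numerical check (plain Python, illustrative numbers) comparing the percentage-deviation definition above with the log deviation that a log-linearized model reports:

```python
import math

x_ss = 1.0
for pct in (0.5, 2.0, 10.0):               # shock sizes in percent
    x = x_ss * (1 + pct / 100)
    pct_dev = 100 * (x - x_ss) / x_ss       # the GDP definition above
    log_dev = 100 * math.log(x / x_ss)      # log-linearization's object
    print(pct, pct_dev, log_dev)
```

For small deviations the two are nearly identical; for large deviations they drift apart, since 100*log(x/x_ss) ≈ 100*(x - x_ss)/x_ss only up to first order.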

Maybe a follow-up question. You wrote in the last reply that, in terms of solving a DSGE model, using linearized or log-linearized models does not matter. Then, does this matter for estimating a DSGE model?

In fact, I would like to estimate a DSGE model from the literature using my own data. The “loglinear” option was in the original estimation command, but when I estimated the model with my data, the following error popped up:

Error using print_info (line 98)
The loglinearization of the model cannot be performed, because the steady state is not strictly positive.

After removing the “loglinear” option the estimation worked well. However, do the estimation results (posterior means, posterior IRFs, forecasts, etc.) depend crucially on whether the “loglinear” option is included? (I took 2 million draws and kept the last 50%.)

It matters whether you consider a variable in terms of levels or logs, i.e. in percentages. But most of the time, we only care about a subset of variables in percentages. So there is no point in logging the whole model. Simply append the logs of the variables you care about as auxiliary equations.

Note that you will make a mistake if you are matching logged data to unlogged variables in the model. In that case your observation equation will be wrong.

Thank you @jpfeifer. It is however a non-trivial task to compare the mod files and the data. I will try to estimate the model first following your instructions, and get back to you if there is any problem, thanks!

It’s unclear what you are doing. In your model, c is log consumption. If you observe log differences multiplied by 100, dc, in the data, then in the model the observation equation should read dc = 100*(c - c(-1)).
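As a sketch of the consistency requirement between data and observation equation (plain Python, illustrative names and numbers, not Dynare syntax): if the model variable c is log consumption, the observable must be built the same way from the level data.

```python
import math

# Consumption in levels (illustrative data)
C_data = [1.00, 1.02, 1.01, 1.05]
# Model concept: c = log C
c = [math.log(v) for v in C_data]
# Observable: 100 * log difference, matching dc = 100*(c - c(-1))
dc_obs = [100 * (c[t] - c[t - 1]) for t in range(1, len(c))]
print(dc_obs)
# Matching unlogged levels C_data directly to c would misstate the
# observation equation: the units (levels vs. log points) differ.
```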