I just finished running my RBC model, which contains labor, consumption, and output, and it worked. I used the log-linearization method and want to compare how well the model matches the data.

I think the output Dynare produces is in deviations from steady state. So, to transform the data, would I also have to express it as deviations from the mean?

I log-linearized the model by hand. Does this mean that for the variables (denoted xhat) I can compare the theoretical moments (via the HP filter) with the moments of the HP-filtered logged data? I get zeros for the _hat variables. How can I compare the data to the model in this case?

Why do you get zeros? For the means, that is expected. If it also happens for the standard deviations, something is wrong.
To compare model variables in percentage deviations to the data, you need to obtain empirical data in percentage deviations from a trend. The easiest way is to use the theoretical HP filter on the log-linearized model variables and compare them to HP-filtered logged empirical data (see Pfeifer (2013), "A Guide to Specifying Observation Equations for the Estimation of DSGE Models", for more details).
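To make the data side concrete, here is a minimal pure-Python sketch of the HP filter applied to logged data (the series below is hypothetical). It solves the standard first-order condition (I + lambda*K'K)*trend = y, where K is the second-difference operator, and returns the cyclical component in log deviations:

```python
import math

def hp_filter(y, lam=1600.0):
    """HP filter: solve (I + lam * K'K) trend = y, cycle = y - trend."""
    n = len(y)
    # Build A = I + lam * K'K, where row i of K is (1, -2, 1) at cols i, i+1, i+2
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(n - 2):
        idx, val = (i, i + 1, i + 2), (1.0, -2.0, 1.0)
        for j, vj in zip(idx, val):
            for k, vk in zip(idx, val):
                A[j][k] += lam * vj * vk
    # Solve A * trend = y by Gaussian elimination with partial pivoting
    b = list(y)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    trend = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * trend[c] for c in range(r + 1, n))
        trend[r] = (b[r] - s) / A[r][r]
    cycle = [yi - ti for yi, ti in zip(y, trend)]
    return trend, cycle

# Hypothetical quarterly GDP in levels: trend growth plus a cyclical wiggle
gdp = [100.0 * 1.005 ** t * (1.0 + 0.01 * math.sin(t)) for t in range(40)]
log_gdp = [math.log(v) for v in gdp]            # take logs first...
trend, cycle = hp_filter(log_gdp, lam=1600.0)   # ...then HP-filter
```

The moments of `cycle` (e.g. its standard deviation) are then directly comparable to the theoretical moments Dynare reports with the `hp_filter` option. In practice one would use an existing implementation (e.g. in statsmodels or Dynare's own tools); this sketch just shows the transformation order: log first, then filter.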

I also get zeros because the model is linear, so the model variables in Dynare are percentage deviations from the steady state. I can use the "hp_filter" option in the "stoch_simul" command to get theoretical moments. As for the data, do I just need to take logs and then HP-filter the logged data? Do I need to demean the empirical data?

Let me make sure I understand. For a linear model (declared with "model(linear)" in Dynare), I just need to use the "hp_filter" option in the "stoch_simul" command to get theoretical moments. As for the data, I just need to take logs first and then HP-filter the data series (having, of course, seasonally adjusted it beforehand, as in your reference paper). Am I right?

I have to repeat myself: treat data and model equally. If your model is in log deviations from steady state (log-linear) and you use the hp_filter option, you have HP-filtered log deviations from steady state. Now you can compare this to HP-filtered logged empirical data, because the latter will then also be in log deviations from the trend/steady state.
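As a concrete check on the data side, here is a minimal sketch (with hypothetical numbers standing in for HP-filtered logged output and consumption) of computing the sample moments that correspond to the theoretical moments Dynare reports:

```python
from statistics import mean, stdev

# Hypothetical cyclical components (HP-filtered logged data), in log deviations
y_cyc = [0.012, -0.008, 0.015, -0.011, 0.006, -0.004, 0.009, -0.013]
c_cyc = [0.007, -0.005, 0.010, -0.006, 0.004, -0.002, 0.005, -0.008]

sd_y = stdev(y_cyc)
sd_c = stdev(c_cyc)
rel_sd = sd_c / sd_y            # relative volatility of consumption
my, mc = mean(y_cyc), mean(c_cyc)
# Sample correlation of consumption with output (comovement)
corr = sum((a - my) * (b - mc) for a, b in zip(y_cyc, c_cyc)) / (
    (len(y_cyc) - 1) * sd_y * sd_c)
```

These sample statistics (standard deviations, relative volatilities, correlations) are the empirical counterparts of the entries in Dynare's theoretical moments table when both sides have been filtered the same way.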

If one wished to compare non-stationary data in levels with output from a log-linearised DSGE model, would one have to actually normalise the model by the non-stationary process (a_t/a_{t-1}, say) or not?

Moreover, would there not be a mismatch between, say, the actual IRFs in levels and the theoretical log-linearised ones?

I ask because Barsky and Sims, in "Information, animal spirits and the meaning of innovations in consumer confidence", appear to have made this comparison.

The standard convention is to compare moments from the data and from the model based on the same transformation, e.g. to use first differences for both.
In contrast, for IRFs it is common to compare VAR and model IRFs directly (the so-called Cogley-Nason-Sims approach). There is a competing approach that demands estimating the VAR on model-generated data. For details, see the attached slide from my Master's lecture. The reference is Christiano, Lawrence J., Martin Eichenbaum, and Robert Vigfusson (2006), "Assessing structural VARs", NBER Macroeconomics Annual, ed. by Daron Acemoglu, Kenneth Rogoff, and Michael Woodford, National Bureau of Economic Research, chap. 1, 1–106. Comparing_Model_and_VAR_IRFs.pdf (108 KB)
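For the moments convention above, a minimal sketch of applying the same transformation to both sides, here log first differences (i.e. growth rates), on a hypothetical levels series:

```python
import math

# Hypothetical quarterly GDP in levels; apply the SAME transformation
# (log first differences) to the empirical series and to model-simulated series
data = [100.0, 101.2, 102.1, 103.5, 104.0, 105.3, 106.9, 107.4]

growth = [math.log(b) - math.log(a) for a, b in zip(data, data[1:])]
mean_g = sum(growth) / len(growth)
sd_g = (sum((g - mean_g) ** 2 for g in growth) / (len(growth) - 1)) ** 0.5
```

Comparing `mean_g` and `sd_g` to the same statistics computed from first-differenced model variables keeps data and model on an equal footing.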

So besides comparing theoretical IRFs with empirical IRFs (the traditional approach), one may compare them with IRFs from a VAR estimated on model-simulated data (the other approach)? Is that correct?

In any event, would comparing theoretical IRFs with empirical ones obtained from data in levels not create a mismatch when the model is log-linearised? Is it warranted?

Also, if one wishes to work with non-stationary data, i.e. without abstracting from the balanced growth path (BGP), should one normalise the nominal variables in the corresponding DSGE model or not?

I do not understand your questions. As shown in Sims, Stock, and Watson (1990), "Inference in linear time series models with some unit roots", it typically does not matter much in VARs whether they are in levels or first differences. What you need to take care of is that your data is in log levels if you want to compare it to percentage deviations. If your shock identification is correct, the IRFs from a VAR in log-levels will provide correct IRFs for deviations from the long-run trend.
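The reason log-levels line up with percentage deviations is the standard first-order approximation log(y/ybar) ≈ (y − ybar)/ybar for small deviations, which a tiny numerical check illustrates:

```python
import math

# A value 2 percent above its trend/steady state (hypothetical numbers)
ybar = 100.0
y = 102.0

log_dev = math.log(y / ybar)       # log deviation, as in a VAR in log-levels
pct_dev = (y - ybar) / ybar        # percentage deviation, as in the model
gap = abs(log_dev - pct_dev)       # approximation error, tiny for small shocks
```

So an IRF from a VAR in log-levels is directly interpretable in the same units as a log-linearised model's percentage deviations from steady state.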

Say you have some non-stationary real data and you do not HP filter it. You run an SVAR in levels and compute IRFs. Say you want to reproduce these IRFs in a DSGE model. You then model technology with a stochastic drift (unit root+long-run trend+exogenous shock).

Should you normalise the nominal variables to be able to properly compare the model IRFs (once the model is solved) with those from the real data, or not? It appears to me that you should not, because the real data was kept non-stationary; is that not the case?

Is it appropriate to run the real data SVAR in levels rather than in log-levels, since one intends to compare it to a DSGE model which is in percentage deviations (as explained)? Papers seem to have done this. What do you think?

I still do not understand what your question is. How do you jump from nominal to real variables? Also, you seem to be confusing stationary IRFs around a long-run nonstationary trend with the nonstationary trend movement itself.

No, that is not appropriate, and I have never actually seen people run VARs in levels. They all use log-levels (even when they often refer to it colloquially as "levels").