Standard deviation of endogenous variables and empirical std

Dear all,

my question is about how to calibrate a parameter so that the standard deviation of an endogenous variable matches its empirical counterpart.
More precisely, I am replicating the model by Jermann and Quadrini (Macroeconomic Effects of Financial Shocks). They calibrate a parameter so that the standard deviation of equity payouts in the model equals the empirical std.

In their presentation of the stylized facts, the empirical std is computed on filtered data (Baxter-King filter). Looking at their (Gauss) code, however, it seems that the moment targeted in the calibration is computed on linearly detrended actual data.
I also don’t know which model-implied std I should use: the theoretical moment or a simulated moment? Filtered or not?
So my question is whether there are any “rules” about which moments (theoretical and empirical) should be compared.
Let me know if I need to be clearer.

Thank you in advance

Rudy

Part of the answer is here: [Difference between different types of moments]

There are no general rules as to whether to use theoretical or simulated moments; I prefer theoretical ones. Regarding filtering, there are also no general guidelines. Best practice is usually to treat your model and data variables symmetrically: if you apply a Baxter-King filter to the data, apply the same filter to the model variables. In that case, because no theoretical moments are available for the Baxter-King filter, you need to use simulated series.
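To illustrate the "treat both sides equally" point, here is a minimal NumPy sketch that applies the same Baxter-King band-pass filter (standard quarterly settings: pass band of 6 to 32 quarters, truncation K=12) to a data series and to a simulated model series before computing the two standard deviations. The two series below are random placeholders standing in for your detrended data and your model's simulated equity-payout series, not actual Jermann-Quadrini data.

```python
import numpy as np

def bk_weights(low=6, high=32, K=12):
    """Symmetric Baxter-King weights: truncated ideal band-pass filter,
    adjusted so the weights sum to zero (removes any remaining trend)."""
    w1, w2 = 2 * np.pi / high, 2 * np.pi / low   # pass band in radians
    j = np.arange(1, K + 1)
    b = np.concatenate(([(w2 - w1) / np.pi],
                        (np.sin(w2 * j) - np.sin(w1 * j)) / (np.pi * j)))
    full = np.concatenate((b[:0:-1], b))         # mirror to get b_{-K},...,b_{K}
    return full - full.mean()                    # force weights to sum to zero

def bk_filter(x, low=6, high=32, K=12):
    """Filtered series; loses K observations at each end of the sample."""
    return np.convolve(x, bk_weights(low, high, K), mode="valid")

rng = np.random.default_rng(0)
# Placeholder series: stand-ins for the detrended data and a simulated
# model series (e.g. from Dynare's stoch_simul with periods > 0).
data_series = rng.standard_normal(200)
model_series = rng.standard_normal(200)

# Same filter on both sides, then compare stds: the model std is the
# object you would move by adjusting the calibrated parameter.
empirical_std = bk_filter(data_series).std(ddof=1)
model_std = bk_filter(model_series).std(ddof=1)
print(f"empirical std: {empirical_std:.4f}, model std: {model_std:.4f}")
```

Because simulated moments are noisy, in practice you would simulate a long series (or average over many replications) before comparing the two numbers.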