Statistical moments from data vs from Dynare

Hello everyone,

I’m currently writing my Bachelor’s thesis, in which I evaluate the Neoclassical business cycle model for Spain and the UK.
My main concern is that when I compare the statistical moments computed from the data for these two countries with the moments Dynare produces, the Neoclassical model appears to do a poor job of fitting the real data. Specifically, the statistics I analyze are the standard deviation, the correlation, and the first-order autocorrelation, the usual business-cycle statistics. They are computed on the variables after detrending with the HP filter. The HP filter and the data moments for the two countries are computed in Excel, while the corresponding model moments are computed by Dynare when I run the model.

I’d really appreciate any explanation of why the moments from Dynare come out so much lower than those from the data. Does it have to do with an option I need to add to my Dynare script? Did I compute my statistical moments in Excel incorrectly? Or does the Neoclassical model simply do a bad job of fitting the data for these two countries? (Based on the papers I have read, it does at least an okay job for the US, which is why I was wondering.)
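To rule out a spreadsheet error, the three statistics can be cross-checked outside Excel. A minimal pure-Python sketch (the sample series below are hypothetical placeholder numbers, not the actual Spanish or UK data):

```python
import math

def std_dev(x):
    """Sample standard deviation (denominator n-1, matching Excel's STDEV.S)."""
    n = len(x)
    m = sum(x) / n
    return math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))

def correlation(x, y):
    """Pearson correlation coefficient (Excel's CORREL)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def autocorr1(x):
    """First-order autocorrelation as corr(x_t, x_{t-1}) -- one common
    convention, equivalent to CORREL of the series against its own lag."""
    return correlation(x[1:], x[:-1])

# Hypothetical HP-filtered cyclical components (illustrative numbers only)
y_cycle = [0.01, 0.02, -0.01, -0.02, 0.015, 0.005, -0.01, 0.02]
c_cycle = [0.005, 0.01, -0.005, -0.01, 0.01, 0.0, -0.005, 0.01]

print(std_dev(y_cycle))
print(correlation(y_cycle, c_cycle))
print(autocorr1(y_cycle))
```

If these numbers agree with the Excel sheet, the spreadsheet formulas are fine and the discrepancy must come from the model side.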

I attach my Dynare script and the external Matlab Steady State solver function.
I would have liked to attach the Excel file with the statistical moments for Spain and the UK, the corresponding statistics I get from Dynare, and a summary of the empirical moments (from the data) versus the theoretical/simulated moments (from Dynare), but I’m not allowed to attach these kinds of files on the forum.
So, in case anyone wants to help, I leave my personal e-mail here so that I can send you these documents when you contact me:

PS: Interestingly, when I do specify the HP-filter option in my Dynare script in order to compare like with like, I get a worse fit than when I omit it. That is, the theoretical moments from Dynare (no HP option) fit the actual (HP-filtered) data better than the moments from Dynare with the HP option do.
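For intuition on why the HP option lowers the Dynare moments so much: the HP filter strips out low-frequency variance, and with rho ≈ 0.975 most of the variance of the technology process sits at low frequencies. A rough pure-Python check (my own sketch, using the AR(1) spectral density and the standard King–Rebelo squared-gain formula for the HP cycle filter; this looks at z itself rather than output, but the mechanism carries over):

```python
import math

rho = 0.975508      # persistence of z from the calibration
sigma = 0.00436629  # innovation standard deviation
lam = 1600          # HP smoothing parameter for quarterly data

def ar1_spectrum(w):
    # Spectral density of z_t = rho*z_{t-1} + e_t at frequency w
    return sigma ** 2 / (2 * math.pi * (1 - 2 * rho * math.cos(w) + rho ** 2))

def hp_gain_sq(w):
    # Squared gain of the HP cycle filter (King-Rebelo formula)
    g = 4 * lam * (1 - math.cos(w)) ** 2
    return (g / (1 + g)) ** 2

# Midpoint-rule integration of the spectrum over (-pi, pi)
n = 200_000
h = 2 * math.pi / n
grid = [-math.pi + (i + 0.5) * h for i in range(n)]
var_raw = sum(ar1_spectrum(w) * h for w in grid)
var_hp = sum(hp_gain_sq(w) * ar1_spectrum(w) * h for w in grid)

print(math.sqrt(var_raw))  # unconditional std of z, about 0.020
print(math.sqrt(var_hp))   # std of HP-filtered z: much smaller
```

So HP-filtered model moments are mechanically much smaller than unfiltered theoretical ones; which of the two fits the HP-filtered data better is a separate question.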

Thanks in advance for any kind of help or advice.
Neoclassical_Model_UK_steadystate.m (1.61 KB)
Neoclassical_Model_UK.mod (663 Bytes)
Neoclassical_Model_SP_steadystate.m (1.62 KB)
Neoclassical_Model_SP.mod (663 Bytes)

You are supposed to compare HP-filtered logged data from the model to the same statistics in the data. In your model, you neither take logs nor HP-filter the simulated series. It should be:

[code]// 1) Definition of variables

var y c k invest l y_l z log_y log_c log_invest;
varexo e;
parameters alppha betta delta psii rho sigma;

// 2) Calibration

alppha = 0.38234;
betta = 0.97326;
delta = 0.04388;
psii = 3.28640;
rho = 0.975508;
sigma = 0.00436629;

// 3) Model

model;
(1/c) = betta*(1/c(+1))*(1+alppha*(k^(alppha-1))*(exp(z(+1))*l(+1))^(1-alppha)-delta);
psii*c/(1-l) = (1-alppha)*y/l;
c+invest = y;
y = (k(-1)^alppha)*(exp(z)*l)^(1-alppha);
invest = k-(1-delta)*k(-1);
y_l = y/l;
z = rho*z(-1) + e;
log_y = log(y);
log_c = log(c);
log_invest = log(invest);
end;

steady_state_model;
l = (1 - alppha)*(1/betta - (1 - delta))/(psii*(1/betta - (1 - delta) - alppha*delta) + (1 - alppha)*(1/betta - (1 - delta)));
k = ((alppha/(1/betta - (1 - delta)))^(1/(1 - alppha)))*l;
invest = delta*k;
y = k^alppha*l^(1-alppha);
c = y - invest;
y_l = y/l;
z = 0;
log_y = log(y);
log_c = log(c);
log_invest = log(invest);
end;

// 4) Computation

shocks;
var e = sigma^2;
end;

stoch_simul(order=1, hp_filter=1600) log_y log_c log_invest;[/code]
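As a sanity check on the closed-form steady state, the formulas can be verified numerically. The sketch below (my own, not part of the original post) assumes the intratemporal condition is psii*c/(1-l) = (1-alppha)*y/l, which is the condition consistent with the labor formula above:

```python
# Numerical check that the steady-state formulas satisfy the model's FOCs,
# using the calibration from the mod-file (pure Python, no toolboxes needed).
alppha = 0.38234
betta = 0.97326
delta = 0.04388
psii = 3.28640

R = 1 / betta - (1 - delta)   # steady-state net return: alppha*(k/l)^(alppha-1)
l = (1 - alppha) * R / (psii * (R - alppha * delta) + (1 - alppha) * R)
k = (alppha / R) ** (1 / (1 - alppha)) * l
invest = delta * k
y = k ** alppha * l ** (1 - alppha)
c = y - invest

# Euler equation in steady state: 1 = betta*(1 + alppha*(k/l)^(alppha-1) - delta)
euler_gap = 1 - betta * (1 + alppha * (k / l) ** (alppha - 1) - delta)
# Intratemporal FOC: psii*c/(1-l) = (1-alppha)*y/l
foc_gap = psii * c / (1 - l) - (1 - alppha) * y / l

print(l, euler_gap, foc_gap)  # both gaps should be zero to machine precision
```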


2. Your parameterization will not allow for big effects of TFP shocks:
a) your technology estimate has a relatively low standard deviation;
b) you model a labor-augmenting technology shock, not a TFP shock, which shrinks the effect further because the shock enters with an exponent smaller than 1;
c) your depreciation rate is high and your beta is low.
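Point b) can be made concrete with a two-line calculation (my sketch, using the values from the mod-file above): since y = k^alppha*(exp(z)*l)^(1-alppha), the shock enters output as exp((1-alppha)*z), so the implied TFP (Solow-residual) innovation is only (1-alppha) times the shock to z.

```python
alppha = 0.38234
sigma = 0.00436629   # std of the innovation e in the z process

# Labor-augmenting shock: y = k^alppha * (exp(z)*l)^(1-alppha)
# => log TFP moves by (1-alppha)*z, so the effective TFP innovation std is:
sigma_tfp = (1 - alppha) * sigma
print(sigma_tfp)     # about 0.0027, well below the usual US value of about 0.007
```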

A calibrated model for the US can be found at … There, the fit is better.

Dear Johannes Pfeifer,

thanks a lot for the clarification and advice. Adding the HP filter with smoothing parameter 1600, and specifying logs of my endogenous variables (as in the data moments), did improve the fit.
Regarding point 2, I have one question.
I’ve seen that in Dynare’s user guide, Chapter 3, “Solving DSGE models - basics”, sigma (the standard deviation of the error term in the technology AR(1) process) is calibrated as sigma = 0.007/(1-alpha), which makes sigma roughly 0.01. I may therefore want to calibrate sigma the same way in order to increase its value and hence the size of my shock (as you suggested in 2.a).
However, nowhere in that chapter is it explained where the 0.007 comes from. In other words, I don’t see why the standard deviation of the error term should equal 0.007 divided by (1-alpha). Where can I find an explanation of this value, or from which relation can I pin it down?

Here is the link to the user guide I’m referring to (in particular the script on page 31): … a.pdf/view

Thanks again for your advice, and I hope you can give me a hand with this last question.

Best wishes,

When you construct a logged TFP series for the US and remove a linear trend, the fluctuation around the trend is roughly 0.7% (I estimate 0.66% for the sample in the mod-file I linked to above).
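This also pins down the sigma = 0.007/(1-alpha) formula from the user guide (my gloss): if the measured TFP innovation is about 0.7% and the shock is modeled as labor-augmenting, then log TFP = (1-alpha)*z, so the innovation to z has to be scaled up by 1/(1-alpha) to reproduce it. A quick check with the alpha from the mod-file above:

```python
alpha = 0.38234      # capital share
sigma_tfp = 0.007    # measured std of the (Solow-residual) TFP innovation, roughly 0.7%

# log TFP = (1-alpha)*z  =>  to match sigma_tfp, the innovation to z must be:
sigma_z = sigma_tfp / (1 - alpha)
print(sigma_z)       # about 0.0113, i.e. "around 0.01" as in the user guide
```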

Perfect! Thanks a lot once again for your time and dedication!