How to see whether the model fits the real data well?

Dear all

After estimating the model with Bayesian methods, how should I judge whether the estimation results fit the real data well?
I know that in the estimation command we can compute the one-step-ahead forecasts of the filtered variables, so is it OK to compare these forecasts with the real data and check whether the real data lie within the confidence intervals?
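For reference, these objects are requested with options along the following lines in the estimation command (the datafile name and option values here are only placeholders; the exact option names should be checked against the Dynare manual for your version):

estimation(datafile=mydata, mode_compute=4, mh_replic=20000, mh_nblocks=2,
           mh_jscale=0.3, smoother, filtered_vars) y;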

Besides, is there any method to compare the standard deviations of the simulated data with those of the real data? Can we compute HPD intervals for the standard deviations of the simulated data and check whether the standard deviations of the real data lie within those HPD intervals?

Really looking forward to your kind help! Thank you so much.

Full information estimation works essentially by minimizing the one-step-ahead forecast errors. So checking the forecasting performance within the estimation sample does not make sense.
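To see why, recall the standard prediction-error decomposition of the Gaussian log-likelihood delivered by the Kalman filter (standard textbook notation, not specific to your model):

$$\ln L(\theta) = -\frac{nT}{2}\ln(2\pi) - \frac{1}{2}\sum_{t=1}^{T}\left(\ln\left|F_t\right| + v_t' F_t^{-1} v_t\right)$$

where $v_t$ is the vector of one-step-ahead forecast errors for the $n$ observed variables and $F_t$ is its covariance matrix. Maximizing the likelihood already trades off exactly these in-sample forecast errors, so evaluating them again on the estimation sample is circular.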

Comparing the standard deviations of simulated and actual data, in contrast, is a valid and worthwhile exercise, because the ML estimator weights all moments according to their precision (and there are covariances at all leads and lags), while economists care particularly about a few select second moments.

Thanks for your reply. I tried to run the simulation and compute the standard deviations of the simulated endogenous variables. However, I am not sure whether my method is right. Could you have a look at the code below?

shocks;
var e_ph_os; stderr 0.3821;
var e_h_os;  stderr 0.0181;
var e_r_os;  stderr 0.0115;
var e_c_os;  stderr 0.0084;
var e_iv_os; stderr 0.0078;
end;

// simulate 1,000,000 periods and report only y, without plots
stoch_simul(periods=1000000, nograph) y;

Using this command, I get 1,000,000 periods of simulated values for y. Then I compute the standard deviation for every 100-period block, which gives me 10,000 standard deviation values. From those I can take the 5% and 95% quantiles.
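A minimal MATLAB sketch of that calculation, assuming the simulated series is stored in oo_.endo_simul after stoch_simul and that the row for y can be found via M_.endo_names (the exact storage format may depend on the Dynare version):

% Sketch only: oo_.endo_simul is assumed to hold the simulated endogenous
% variables (one row per variable), with names listed in M_.endo_names.
y_row = strmatch('y', M_.endo_names, 'exact');   % locate y
y_sim = oo_.endo_simul(y_row, :);                % 1 x 1,000,000 simulated values

block_length = 100;
n_blocks     = floor(numel(y_sim)/block_length);                  % 10,000 blocks
y_blocks     = reshape(y_sim(1:n_blocks*block_length), block_length, n_blocks);

block_std  = std(y_blocks);                      % one std per 100-period block
sorted_std = sort(block_std);
lower_5    = sorted_std(ceil(0.05*n_blocks));    % 5% quantile
upper_95   = sorted_std(ceil(0.95*n_blocks));    % 95% quantile

% The standard deviation of the actual data series can then be compared
% with the interval [lower_5, upper_95].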

As a statistical test, that might work. However, most of the literature relies more on an eyeball comparison.

So is ‘improving the fit of the model’ (as sometimes used in research papers) kind of a technical term in DSGE modeling, meaning improving the moment matching? Or does the meaning depend on the context?

For example, Del Negro says adding habit persistence improves the fit of the model, but then he checks that using marginal likelihood plots.

Is it the case that a higher marginal likelihood (i.e., an improved fit of the model) necessarily brings the moments of the simulated series closer to the moments of the actual data (another measure of improved model fit)?

Or should we distinguish the fit of the DSGE model (using, say, marginal likelihoods and marginal data densities) from the fit of the DSGE model to the data (i.e., comparing moments)?

You are confusing fit in the context of full-information and limited-information estimation techniques. They imply different weightings of the moments. See above:

The marginal data density does take the moments into account, but in a non-obvious way. Also, you may not care about the moments to which the MDD assigns the highest weight.

So if I understand correctly, improving the fit of the model (in the sense of adding or removing model features to increase the MDD) is a somewhat different exercise from improving the fit of the model (in the sense of comparing moments). I guess one needs to improve the MDD first and, if satisfied, then try to match moments afterward?

Let me also ask how researchers typically build estimated models. Do you first build a baseline model with all essential features and then try adding/removing non-essential features to improve the MDD, and after that try to match selected moments?

You still did not grasp the main point: the MDD incorporates the fit of ALL moments, weighted by the precision with which they are estimated. That may imply that the 200th autocorrelation between output and consumption is really well matched. While that is the most efficient way of estimating parameters, it is often not what researchers care about. We usually only care about the first two to four autocorrelations and cross-correlations.
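As an illustration, a minimal sketch of such a comparison for the first few autocorrelations, assuming the simulated series y_sim and the actual data series y_data are already available as vectors on the same scale (e.g. both detrended; the names are placeholders):

% Compare the first four autocorrelations of simulated and actual output.
max_lag = 4;
ac_sim  = zeros(max_lag, 1);
ac_data = zeros(max_lag, 1);
for k = 1:max_lag
    c_sim      = corrcoef(y_sim(1+k:end),  y_sim(1:end-k));
    c_data     = corrcoef(y_data(1+k:end), y_data(1:end-k));
    ac_sim(k)  = c_sim(1, 2);
    ac_data(k) = c_data(1, 2);
end
disp([(1:max_lag)' ac_sim ac_data]);   % columns: lag, simulated, actual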

Regarding estimated models: you estimate either via moment matching or via full-information methods, but you pretty much never mix the two. When selecting features, i.e. doing model comparison, the MDD is the way to go.
The MDD penalizes complexity; pure moment matching could always achieve a perfect fit simply by adding enough features and therefore parameters.
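For concreteness, a minimal sketch of such a model comparison, assuming two variants (e.g. with and without habit persistence) have already been estimated and their results saved by Dynare, and that the log MDD is stored in oo_.MarginalDensity (the file names here are hypothetical, and the available fields depend on the Dynare version and estimation options):

% Hypothetical file names; Dynare saves results as MODFILENAME_results.mat.
baseline = load('baseline_results.mat', 'oo_');
habit    = load('habit_results.mat',    'oo_');

% Log marginal data densities (Laplace approximation here; a Modified
% Harmonic Mean estimate is also available after the MCMC run).
logmdd_baseline = baseline.oo_.MarginalDensity.LaplaceApproximation;
logmdd_habit    = habit.oo_.MarginalDensity.LaplaceApproximation;

% With equal prior model probabilities, the log Bayes factor is the
% difference in log MDDs; positive values favour the habit variant.
log_bayes_factor = logmdd_habit - logmdd_baseline;
fprintf('Log Bayes factor (habit vs. baseline): %6.2f\n', log_bayes_factor);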
