Model Evaluation after MLE

Good morning to all,

I wanted to know whether there is any way of evaluating the goodness of fit of a model (for example, in linear regression one can use the adjusted R² or an information criterion) other than comparing second moments. I have simulated data with a simple RBC model, and after maximum likelihood estimation using consumption as the observable variable, I would like to assess whether the “closest” model to the DGP is the RBC model rather than the Solow model. I have attached the codes for the simulation, the estimated RBC and Solow models, and the simulated dataset. I was thinking about using the reported “Final value of minus the log posterior (or likelihood)” as a measure, but I am not sure whether higher values indicate a better fit or the contrary (with the RBC model this value is negative, while for the Solow model it is positive).

rbc_data.xlsx (70.6 KB)

RBC_mod_dgp.mod (772 Bytes)

RBC_mod.mod (702 Bytes)

Solow_mod.mod (643 Bytes)

Thank you very much,

If the models are nested, you can use a likelihood ratio test. Otherwise, you may need to conduct Bayesian model comparison. Note that the value Dynare reports is minus the log-likelihood, so a smaller value indicates a better fit; its sign carries no meaning in itself, and raw likelihoods are not directly comparable across models with different numbers of parameters, which is why a formal test or Bayesian comparison is needed.
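If it helps, here is a minimal sketch of the likelihood ratio test in MATLAB, runnable in the console after estimating both models. The log-likelihood values and the number of restrictions are hypothetical placeholders, and it assumes one of your models is nested in the other; remember to flip the sign of the value Dynare prints.

% Likelihood ratio test, assuming the restricted model is nested
% in the unrestricted one. Replace the values below with the
% negatives of the "Final value of minus the log posterior
% (or likelihood)" reported by Dynare for each model.
logL_unrestricted = 312.4;   % hypothetical maximized log-likelihood
logL_restricted   = 305.1;   % hypothetical maximized log-likelihood
df = 2;                      % hypothetical number of restrictions
LR = 2*(logL_unrestricted - logL_restricted);   % LR statistic, chi2(df) under H0
pval = 1 - chi2cdf(LR, df);  % chi2cdf requires the Statistics Toolbox
fprintf('LR = %.2f, p-value = %.4f\n', LR, pval);

A small p-value rejects the restricted model in favor of the unrestricted one.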

Thank you very much for your reply. For Bayesian model comparison, should I define the two models in the same mod-file or in two separate ones? Could you please give me a small example code?

Ideally, you use two different mod-files. tests/estimation/fs2000_model_comparison.mod · master · Dynare / dynare · GitLab is an example.
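To sketch the idea: assuming you re-estimate both models with Bayesian estimation (i.e. with priors specified, since plain ML does not deliver the marginal data densities that model comparison relies on), the comparison step could look like the following, using Dynare’s model_comparison command with the file names from your attachments and an assumed equal prior probability on each model:

// Run after RBC_mod.mod and Solow_mod.mod have each been estimated
// with estimation(...), so that their _results.mat files exist.
// The prior model probabilities (0.5, 0.5) are an assumption.
model_comparison RBC_mod(0.5) Solow_mod(0.5);

Dynare then reports the (log) marginal data densities and the posterior model probabilities; the model with the higher posterior probability is the one favored by the data.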

Thank you very much for your reply.