I am trying to compare two models that are estimated on different datasets (different in the sense that one model uses an extra observable series that is not included in the other. Example: model 1 is estimated on observable series y, plus some other observables, while model 2 is estimated on observables x and z, where x + z = y, and the rest of the observables are the same).
I am interested in some model-fit tests and have read the only relevant post I could find (link below).
My questions are :
Can one run these posterior predictive checks manually using any of the `forecast` commands of Dynare?
Does the new unstable version include any routines for running posterior predictive checks?
Honestly, I am not quite sure I am doing this right.
What I have done is:
I have estimated the model.
I have compared the second moments of the data to the second moments from oo_.var (are these moments based on the mode or on the posterior mean?)
What I was thinking is based on a paper I read, which says:
It seems the author makes a forecast of the endogenous variables, computes their confidence intervals, and also draws simulations.
Initially I thought I could do that with the smoothed variables, but they turn out to be exactly the same as the observed variables.
According to the Dynare documentation, none of the forecast commands includes a burn-in option.
Is there a way to do a similar exercise, please?
% set the parameters draws to the model structure
M_ = set_all_parameters(xparam1,estim_params_,M_);
% simulate the model for the parameter draw written to M_
[ys, oo_] = simult(oo_.dr.ys,oo_.dr,M_,options_,oo_);
%set second part of output cell
where you have to adjust
plus the number of data points you want to simulate. Moreover, you need to adjust the statistic you want to read out. In the example, it is the mean.
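For concreteness, here is a minimal sketch of what such a posterior function could look like as a whole, assuming the header required by Dynare's posterior_function option; the observable selection (obs_indices), the burn-in length, and the choice of statistic are placeholders to adjust, not values from this thread:

```matlab
function output_cell = posterior_function_demo(xparam1, M_, options_, oo_, estim_params_, bayestopt_, dataset_, dataset_info)
% Called by Dynare once per posterior parameter draw xparam1.

% set the parameter draw in the model structure
M_ = set_all_parameters(xparam1, estim_params_, M_);

% simulate the model for this parameter draw (as in the snippet above)
[ys, oo_] = simult(oo_.dr.ys, oo_.dr, M_, options_, oo_);

% placeholders: which rows of ys to read out and how many initial periods to discard
obs_indices = 1:9;   % hypothetical positions of the observables in ys
burn_in     = 100;   % hypothetical burn-in length

% read out the statistic of interest; here, the mean of each observable
output_cell = {mean(ys(obs_indices, burn_in+1:end), 2)};
end
```

Both the indices and the burn-in length would of course need to match your particular model and simulation length.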
Yes, you got that right. And no, you don’t need a stoch_simul command here, because I extracted the relevant lines of code doing the simulation and moment computation. Please be aware that this piece of code will not work if you use the
Many thanks for your contribution Professor.
I am not quite sure I have done it right, though. I basically copied the command lines you mentioned inside the file posterior_function_demo.m (posterior_function_demo.m attached).
I gave it a try by loading around 160,000 MH replications that I had run before and then triggering posterior_function_demo.m
by the command
which comes after the estimation(…) command.
Although I get some confidence intervals (CIs), I do not get any simulated endogenous variables.
Also I get the following error:
It works for me as well now. I think it required running the MH replications from scratch rather than loading existing ones, as I did yesterday.
Regarding the output: I am trying to understand the output I get in
I am getting 500 cells, each with 9 values inside (probably meaning 500 draws times the 9 observables I specified in
Is my reading of the results right, please?
If so, what kind of interpretation can I make? Does it mean that I can compute the confidence intervals myself based on these 500 draws for each observable (i.e. their mean and confidence intervals)?
Don't I get simulated variables, as specified in the option?
Do I get a confidence interval?
I am struggling to understand: what exactly is the output I am getting?
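If the output really is a 500-element cell array with one 9-element vector per posterior draw, then yes, you could compute the posterior mean and a percentile-based credible band yourself. A minimal sketch, assuming the cell array is called output_cell (the name is an assumption):

```matlab
% stack the per-draw vectors into a draws-by-observables matrix (500 x 9)
stats = cell2mat(cellfun(@(c) c(:)', output_cell, 'UniformOutput', false));

post_mean = mean(stats, 1);            % 1 x 9: posterior mean per observable

% 90% equal-tailed credible band via sorted draws (avoids toolbox functions)
n_draws = size(stats, 1);
sorted  = sort(stats, 1);
ci_lo   = sorted(ceil(0.05 * n_draws), :);
ci_hi   = sorted(floor(0.95 * n_draws), :);
```

Note that this gives an equal-tailed interval, not an HPD interval.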
Earlier you helped with the file posterior_function_demo.m
to simulate the endogenous variables with the estimated parameters.
It was quite helpful, as I could finally use it; thank you for that.
Is there a way to obtain the second moments of HP-FILTERED variables instead, by using the option:
inside the posterior_function_demo.m file? It did not work when I tried.
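One workaround, if that option is not honored inside the posterior function, is to HP-filter the simulated series yourself before taking moments. A minimal sketch of the standard sparse-matrix HP filter (lambda = 1600 for quarterly data); this is generic MATLAB code, not a Dynare routine:

```matlab
function y_cyc = hp_cycle(y, lambda)
% HP-filter each column of y (T x n) and return the cyclical component.
T = size(y, 1);
% second-difference operator, (T-2) x T
e = ones(T, 1);
D = spdiags([e -2*e e], 0:2, T-2, T);
% the trend solves (I + lambda*D'*D) * trend = y; subtract it to get the cycle
y_cyc = y - (speye(T) + lambda * (D' * D)) \ y;
end
```

Inside the posterior function you would then compute std(y_cyc), corr(y_cyc), etc. on the filtered series instead of relying on the option.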
Hi Professor Pfeifer !
To replace my last post, I have a slightly different thought!
As an update, I have been thinking of using posterior_function_demo.m to combine the posterior distributions from two different estimations, but
(with my not-so-good knowledge) I have not come up with a solution.
What I have in mind is that I draw a sample of, say, 100 simulated series, but I want the posterior for some (one or two) parameters to come from a different estimation
rather than from my baseline estimation; let's say a different M_ = set_all_parameters(xparam1,estim_params_,M_); from another estimation.
For example, I would like the parameters alpha and beta to come from a different estimation_1 via M_ = set_all_parameters(xparam1,estim_params_,M_); so that I overwrite
the mean, mode, and credible interval that I have in my baseline estimation_0 for these two parameters.
Is this possible by manipulating M_ = set_all_parameters(xparam1,estim_params_,M_);?
That is not easily possible. The way to go here would be to use the posterior_sampler to return the posterior parameters from both estimations. Then you need to write your own function that loops over these parameters, combines them, and then sets them and runs the desired command to compute the posterior objects.
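A rough sketch of that loop, assuming the two draw matrices have already been recovered into draws_base and draws_alt (rows = draws, columns = estimated parameters, hypothetical names) and that alpha and beta occupy known positions in xparam1 (the positions below are made up):

```matlab
idx = [3 4];    % hypothetical positions of alpha and beta in xparam1

n_draws = min(size(draws_base, 1), size(draws_alt, 1));
results = cell(n_draws, 1);
for i = 1:n_draws
    xparam1      = draws_base(i, :)';      % baseline draw (estimation_0)
    xparam1(idx) = draws_alt(i, idx)';     % overwrite alpha, beta (estimation_1)
    M_ = set_all_parameters(xparam1, estim_params_, M_);
    [ys, oo_] = simult(oo_.dr.ys, oo_.dr, M_, options_, oo_);
    results{i} = mean(ys, 2);              % or whatever statistic you need
end
```

This assumes the two estimations use the same ordering for the overlapping parameters; otherwise the indices have to be mapped by parameter name first.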
Many thanks for your patience Professor !
Following your post, I checked the posterior_sampler.m function. As you said, it is involved.
It is a good exercise, though.
When/if you get some free time, may I ask:
what would the TargetFun and ProposalFun be in this case (from the list below of the inputs of posterior_sampler)?
I am trying to see how I can do the first step only, i.e. recall the posterior parameters from both estimations.
Apologies for the late reply Professor. I have not been around these days.
Before I try looping over them (I still have to go through the details of the link you gave), I just want to make something clear to myself.
When I draw a sample of (let's say) 1000 draws using the
(after the estimation command), I basically get 1000 simulated variables Y, C, etc.
Now my question is: am I drawing these draws over the distribution of the estimated parameters or over the distribution of the shocks?
(I wonder if the question is formulated right!)
(To make myself clearer:) how does the 'sampler' obtain the 1000 different simulated Y's that I get? By drawing
over the distribution of the respective parameters (computed with the MH)?
Or over the distribution of the shocks (whose distribution is, in this case, also estimated via MH, unlike in a calibrated model)?
Or over both (since both the parameters and the shock sizes are treated as parameters during the estimation, and their distributions are computed via the MH replications)?
P.S. I would assume the last one! Is that right, please?
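Schematically, the last interpretation would mean that each replication draws both a parameter vector from the MH chain and a fresh shock sequence from the (estimated) shock distribution, along these lines (variable names are illustrative; this is only a sketch of the logic, not Dynare's internal code):

```matlab
for i = 1:n_draws
    % parameter uncertainty: one draw from the MH posterior sample
    xparam1 = posterior_draws(i, :)';
    M_ = set_all_parameters(xparam1, estim_params_, M_);

    % shock uncertainty: fresh shocks using this draw's shock covariance
    shocks = randn(T, M_.exo_nbr) * chol(M_.Sigma_e);

    % ... simulate with these shocks and store the resulting series ...
end
```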