I am new to the Dynare world and I have found it very interesting.

However, I am trying to run a recursive forecasting exercise with a vintage (real-time) data set, which means that my data set is "diagonal". To be more specific, I have 10 observables. The first 10 columns contain 110 obs, the next 10 columns contain 111 obs, the next 10 columns contain 112 obs, etc., so every 10 columns correspond to a different vintage date. My task is to take the first 10 columns (with 110 obs), estimate the model, and generate forecasts 1, 2, 3, and 4 periods ahead; then take the next 10 columns (with 111 obs) and do the same, and so on until the data set is exhausted (26*10 columns). How can I do that in Dynare? I see that the estimation command works for data sets unlike mine.

The estimation command I use is: estimation(nograph, datafile = DSGE, xls_sheet=DSGE_vintage, xls_range=B4:JA138, prefilter = 1, mode_compute = 4, mh_nblocks = 0, mh_replic = 20000, mh_drop = 0.3, mh_jscale = 0.35, first_obs = ???, nobs = ???, forecast = 4);

Unfortunately, there is no automated way of doing this with Dynare (yet). You would have to work with a sequence of estimation commands calling different datafiles for now.
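One way to organize such a sequence is with Dynare's macro processor inside the .mod file, so that the estimation command is regenerated once per vintage. A minimal sketch, assuming one Excel sheet per vintage, 26 vintages, and that vintage v has 109+v observations (the sheet names and the storage of forecasts in a cell array are illustrative, not Dynare defaults):

```matlab
// Sketch: one estimation call per vintage via Dynare's macro processor.
@#for v in 1:26
estimation(nograph, datafile = DSGE,
           xls_sheet = DSGE_vintage_@{v},   // hypothetical sheet name per vintage
           prefilter = 1, mode_compute = 4,
           mh_replic = 20000, mh_drop = 0.3, mh_jscale = 0.35,
           first_obs = 1, nobs = @{109 + v}, forecast = 4);
// Store this vintage's forecasts before the next call overwrites oo_
vintage_forecasts{@{v}} = oo_.MeanForecast.Mean;
@#endfor
```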

If I forecast (horizon = 4) only one variable with the estimation command (not recursively, but using the entire dataset and forecasting once), Dynare returns a 5x1 matrix (for both oo_.MeanForecast.Mean.d and oo_.PointForecast.Mean.d), for which I have realized that the first row is the (last) observation at time t and the forecasts for t+1, t+2, t+3, and t+4 follow in rows 2, 3, 4, and 5, respectively. Is this interpretation of the matrix correct?

In the estimation command I have set prefilter = 1 to demean the data before estimation. Are the forecasts that Dynare returns (in the above matrices) demeaned as well? If not, how can I compare them with the actual data (for computing the forecast errors)? Do I have to add the sample mean back to the forecasts and then compare? Is that automatic in Dynare?

When I click on 'options' at the end of the Dynare run, I get the following error in Matlab:
Exception in thread “AWT-EventQueue-0” java.lang.NullPointerException
at com.mathworks.mlwidgets.workspace.ClassicWhosInformation.(ClassicWhosInformation.java:20)
at com.mathworks.mlwidgets.workspace.ClassicWhosInformation.getInstance(ClassicWhosInformation.java:38)
at com.mathworks.mlwidgets.workspace.WhosRecordlistModel.setWhosInformation(WhosRecordlistModel.java:45)
at com.mathworks.mlwidgets.array.editors.MatlabWorkspaceLikeModel.setSuperWhosInformation(MatlabWorkspaceLikeModel.java:379)
at com.mathworks.mlwidgets.array.editors.MatlabWorkspaceLikeModel.access$100(MatlabWorkspaceLikeModel.java:35)
at com.mathworks.mlwidgets.array.editors.MatlabWorkspaceLikeModel$1.run(MatlabWorkspaceLikeModel.java:392)

etc. Any idea what that is?

In a forecasting race (with DSGE, BVAR, TVP-BVAR, and other models), which forecasts (from the DSGE) would you use for comparison with the other models? The PointForecasts or the MeanForecasts? I know the difference regarding future shock uncertainty; I am just not sure which one is better to use.

Following your suggestion above, I am using a sequence of estimation commands calling a different datafile each time. But I have the impression that Dynare keeps the same number of observations, even though my datafile should grow by one observation in the second estimation command (as you can see in the xls_range). So I use:

estimation(nograph, prefilter = 1, datafile = DSGE, xls_sheet = DSGE_vintage_Dynare, xls_range = L3:U114, mode_compute = 4, mh_nblocks = 2, mh_replic = 150, mh_drop = 0.35, mh_jscale = 0.35, forecast = 4) d;
but what I see is that both estimations use the same number of observations (110), while the first should use 110 obs and the second 111. Is there anything wrong in my commands? Where in my workspace can I find what Dynare reads as the datafile?

This is correct for 4.4.3. In the current unstable version, to be released as 4.5 by May 2016 (hopefully), you should get only the forecasts in a 4 by 1 matrix.

Please try the unstable version, which should report them including the mean.
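In 4.4.3, a manual workaround is to re-add each observable's sample mean before computing forecast errors. A sketch for a single observable d, assuming the 5x1 layout described above (the single-column Excel range and the vector actual_d of realized values are placeholders you would replace with your own):

```matlab
% Sketch: undo the demeaning from prefilter = 1 (Dynare 4.4.3 layout assumed)
data_d = xlsread('DSGE.xls', 'DSGE_vintage', 'B4:B113');  % one observable's column
fcst   = oo_.MeanForecast.Mean.d + mean(data_d);  % 5x1: last obs, then t+1..t+4
% forecast errors against realized data (actual_d is a hypothetical 4x1 vector)
err = actual_d - fcst(2:5);
```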

That is a Java problem with Matlab. There seems to be nothing one can do about it.

I would use the same type of forecasts you use in the other models.

I would explicitly specify first_obs and an increasing nobs in the respective estimation commands, to avoid a previous setting still being in effect.
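Concretely, two consecutive calls would differ only in nobs (and in the Excel range, which should grow by one row; the second range below is illustrative). A sketch based on your command:

```matlab
% First vintage: 110 observations
estimation(nograph, prefilter = 1, datafile = DSGE,
           xls_sheet = DSGE_vintage_Dynare, xls_range = L3:U114,
           mode_compute = 4, first_obs = 1, nobs = 110, forecast = 4) d;
% Second vintage: 111 observations (range extended by one row, assumed)
estimation(nograph, prefilter = 1, datafile = DSGE,
           xls_sheet = DSGE_vintage_Dynare, xls_range = L3:U115,
           mode_compute = 4, first_obs = 1, nobs = 111, forecast = 4) d;
```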

What do you mean with "direct one"? How would that look? Please clarify what your understanding of the term is, so we are sure to talk about the same objects.

As far as I know:
Iterated: if the model is y_t = c + B*y_(t-1), then the h-step forecast is y_(t+h) = (I + B + ... + B^(h-1))*c + B^h*y_t, but for
Direct: the estimated model is y_(t+h) = c + B*y_t (with horizon-specific coefficients), and you forecast y_(t+h) directly from the available information y_t.

Also, in Marcellino et al. (2004): "Iterated" multiperiod-ahead time series forecasts are made using a one-period-ahead model, iterated forward for the desired number of periods, whereas "direct" forecasts are made using a horizon-specific estimated model, where the dependent variable is the multi-period-ahead value being forecasted.
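As a toy numerical illustration of the iterated scheme in a bivariate VAR(1) (all numbers made up):

```matlab
% Iterated h-step forecast from the one-step model y_t = c + B*y_(t-1):
% y_(t+h|t) = (I + B + ... + B^(h-1))*c + B^h*y_t
B  = [0.5 0.1; 0.0 0.8];
c  = [0.2; 0.1];
yt = [1; 1];
h  = 4;
f  = yt;
for k = 1:h
    f = c + B*f;   % iterate the one-step model forward
end
% f now holds the iterated 4-step-ahead forecast y_(t+4|t)
```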

I see. I may be wrong, but for a linear state-space model of the type we are considering here, the two concepts seem to be identical. You could define an h-period-ahead forecast variable and estimate it directly within the model, but it will use the same forward iteration of the state-space model as the iterated forecast.

One more question, Professor. Does the unstable version of 2 Apr 2016 report the recursive forecasts including their mean? I demean my data before estimation, and I need to add the means back to the forecasts to calculate the forecast errors.

I am trying to do some forecasts using (real-time) vintage data, following Gali and Monacelli (2005), Justiniano and Preston (2010), and Alpanda et al. (2011) (with priors from Smets and Wouters). Hence I work with a sequence of estimation commands (along with the forecasts), calling a different datafile each time, since there is no automated way to call a different vintage in every recursion. I use mode_compute=6 (I get an error with chol when I use Sims's optimizer) with 300,000 iterations for all the commands.

My problem is that for some estimations the convergence looks just OK, but for the majority of them both the blue and red lines are completely flat and at a distance from each other! How is that possible? I mean, how can the convergence look OK in, say, the first estimation, and then, when an extra data point (from the next vintage) enters the sample and the model is re-estimated with the next estimation command, the convergence becomes completely flat?

I have attached the .mod file and the dataset. Could you please have a look at them? I can't provide the log file yet because the program is still running, and it will be for a couple of days.

The problem most probably derives from the jumping covariance matrix. In every step you recompute the mode from scratch. It might be better to recursively load the last mode-file and then run mode_compute. Did you check that your first mode makes sense?
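A sketch of that recursion, assuming the .mod file is called DSGE.mod so that Dynare's default mode-file name is DSGE_mode (adjust to your model name):

```matlab
// First vintage: compute the mode from scratch
estimation(datafile = DSGE, nobs = 110, mode_compute = 4, forecast = 4);
// Later vintages: start the optimizer from the previously saved mode-file
estimation(datafile = DSGE, nobs = 111, mode_file = DSGE_mode,
           mode_compute = 4, forecast = 4);
```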

I see. So I will find the mode with my first vintage dataset and then load it for the rest of the sequential estimation commands. But I have a question: how can I judge whether the first mode makes sense? How do I see that?

The problem is that my code is running right now without mode_check included in the estimation command. I also haven't included trace_plot in my code. Is there any way to generate those graphs after my code finishes, or should I include these commands and re-run it?

Both can be generated after an estimation run. The problem will be that your repeated calls to estimation will overwrite previous results. Therefore you will only have access to the last estimation command.
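To keep the earlier runs accessible, one can save the relevant result structures right after each estimation call, before the next call overwrites them (the file-name pattern and loop variable v are illustrative):

```matlab
% Immediately after each estimation call, e.g. for vintage v:
save(sprintf('vintage_%02d_results.mat', v), 'oo_', 'M_', 'options_');
% Later, load any vintage's .mat file to inspect its results or produce plots
```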

I have stopped the repeated estimations, because each time the mode was computed from scratch (and, as you suggested, I will load the mode_file of the first estimation instead). So right now I am running my code with only the first estimation (using the first vintage data set), to make sure that the mode is computed well. If everything looks fine, I will run the code with the repeated estimations, loading the mode from the first estimation each time.

So could you please tell me how I can generate both graphs after my estimation ends?