Hi,

In the file “bvar_forecast.m”, line 145 reads:

it follows that the denominator will invariably be 1!

Would it not be correct to employ instead

`size(forecast_data.realized_val, 1)`

?

Many thanks for this clarification!

Best, Jorge

Sure, you’re right.

Thanks for reporting this. It will be fixed in Dynare 4.1.1.

Best

Sebastien,

Thank you very much for your quick reply.

I have a second remark though.

In terms of the forecasting ability among BVARs with different lags, my intuition tells me that the farther you go in the forecast horizon, the larger the RMSE you get (as in any model).

In the example you provided, «bvar_and_dsge.mod», I tried different horizons and I always get the same RMSE figures, i.e. in line 30:

changing `forecast = 8` to 2, 4, 6, etc.

The problem seems to be in “bvar_forecast.m”, line 139:

which yields incredibly small values, which, once squared, exacerbate the problem (line 142).

As a result, the reported RMSEs are almost identical, so it is nearly impossible to tell which model is better.

Any suggestion on this is highly appreciated.

Best, Jorge

Hi,

I have checked the forecast code again and I think it is fine; even though I cannot completely rule out a bug, at least I don’t see one.

The point is that the out-of-sample forecast returns very rapidly to the steady state (which is zero here), as do all VAR forecasts, since the forecast is made under the assumption that no shocks occur.

That is why the line you mention gives very small values: only the first values of the out-of-sample forecast are significantly different from zero.

The reason why you get the same RMSE whatever the number of lags is probably that you have many dates in your out-of-sample forecast. Since the forecast returns to zero within a few periods, whatever the number of lags, you get almost the same mean error.
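To see the mechanism, here is a minimal sketch (in Python/NumPy rather than the MATLAB of “bvar_forecast.m”; the AR(1) coefficient, horizons, and function name are illustrative assumptions, not Dynare code) of how a no-shock forecast decays geometrically to the zero steady state, so that an RMSE taken over a long window is dominated by near-zero terms:

```python
import numpy as np

# Illustrative stationary AR(1): y_t = phi * y_{t-1} + shock.
# The no-shock forecast from the last observation y0 is y0 * phi^h.
def no_shock_forecast(phi, y0, horizon):
    return np.array([y0 * phi ** h for h in range(1, horizon + 1)])

y0 = 1.0
path = no_shock_forecast(0.5, y0, 16)

# The forecast decays geometrically toward the zero steady state,
# so only the first few values differ noticeably from zero.
# With realized values taken as zero, the RMSE over a long window is
# dominated by near-zero terms and barely discriminates:
realized = np.zeros(16)
rmse_long = np.sqrt(np.mean((path - realized) ** 2))
rmse_short = np.sqrt(np.mean((path[:4] - realized[:4]) ** 2))
print(rmse_short, rmse_long)
```

Whatever the lag order, the tail of the forecast path is essentially zero, so averaging over many out-of-sample dates washes out the differences between models.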

To get some difference, you probably need to reduce the size of the out-of-sample forecast, i.e. reduce the distance between the option “nobs” (number of observations) and the size of your dataset: all observations of the dataset after “nobs” are taken as out-of-sample and used in the RMSE calculation.
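The split being described can be sketched as follows (again in Python/NumPy; the function and variable names are illustrative, not Dynare's): the first “nobs” observations form the estimation sample, and everything after them is scored by the RMSE.

```python
import numpy as np

def rmse_out_of_sample(data, nobs, forecast_fn):
    """Score the T - nobs observations that follow the estimation sample.

    `forecast_fn(history, horizon)` stands in for the BVAR forecast;
    the names here are illustrative, not Dynare's.
    """
    history, realized = data[:nobs], data[nobs:]
    forecast = forecast_fn(history, len(realized))
    return np.sqrt(np.mean((forecast - realized) ** 2))

# With a forecaster that returns to the zero steady state, only the
# first few steps carry information, so a short out-of-sample window
# (nobs close to the dataset size) discriminates between models better.
rng = np.random.default_rng(0)
data = rng.standard_normal(100)
to_zero = lambda history, h: np.zeros(h)
print(rmse_out_of_sample(data, 92, to_zero))  # 8 out-of-sample points
print(rmse_out_of_sample(data, 60, to_zero))  # 40 out-of-sample points
```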

Hope this helps,