Help DSGE-VAR model II

Hi

We are trying to analyze Chinese monetary policy with a DSGE-VAR model, following the approach in Del Negro, M., and F. Schorfheide (2004). The code is from Prof. Stéphane. But I get an error during the estimation; the output is as follows:

Improvement on iteration 1000 = NaN


f at the beginning of new iteration, NaN
Predicted improvement: -0.000000000
lambda = 1; f = NaN
Norm of dx 0
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
bad gradient ------------------------
Cliff. Perturbing search direction.
Predicted improvement: -0.000000000
lambda = 1; f = NaN
Norm of dx 0

Improvement on iteration 1001 = NaN
iteration count termination
Objective function at mode: NaN
Objective function at mode: NaN

RESULTS FROM POSTERIOR MAXIMIZATION
parameters
prior mean mode s.d. t-stat prior pstdev

gam 0.500 0.5000 NaN NaN norm 0.2500
pistar 1.000 1.0000 NaN NaN norm 0.5000
rstar 0.500 0.5000 NaN NaN gamm 0.2500
kapa 0.300 0.3000 NaN NaN gamm 0.1500
tau 2.000 2.0000 NaN NaN gamm 0.5000
phi1 1.500 1.5000 NaN NaN gamm 0.2500
phi2 0.125 0.1250 NaN NaN gamm 0.1000
phoR 0.500 0.5000 NaN NaN beta 0.2000
phog 0.800 0.8000 NaN NaN beta 0.1000
phoz 0.300 0.3000 NaN NaN beta 0.1000
dsge_prior_weight 1.000 1.0000 NaN NaN unif 0.5774
standard deviation of shocks
prior mean mode s.d. t-stat prior pstdev

eg 0.875 0.8750 NaN NaN invg 0.4300
ez 0.630 0.6300 NaN NaN invg 0.3230
er 0.251 0.2510 NaN NaN invg 0.1390

Log data density [Laplace approximation] is NaN.

MH: Multiple chains mode.
MH: Old metropolis.log file successfully erased!
MH: Creation of a new metropolis.log file.
MH: Searching for initial values…
MH: I couldn’t get a valid initial value in 100 trials.
MH: You should Reduce mh_init_scale…
MH: Parameter mh_init_scale is equal to 0.400000.
MH: Enter a new value…

I have changed mh_init_scale to 0.3, but it still has the same problem.

Another question: how should we determine the number of lags of the VAR? Should we choose the lags in advance with some tests, or can the model determine the lags itself?

The attachments are the data and code.

Thanks a lot

George.Z Shi.C
frank.mod (2.05 KB)
zhw1.xls (21 KB)

Hi,

The option mh_init_scale won’t change anything here. The problem here is that the optimization of the posterior kernel gives you crazy results (the objective function is NaN!). The posterior mode and the hessian matrix at the posterior mode are used to initialize the mcmc. The mcmc crashes because you do not have a good posterior mode. So you first need to get the optimization right. You may try to change the initial condition of the optimization routine, or change the optimization routine itself.
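For instance (a sketch only: the parameter names come from the results table above, the starting values are illustrative guesses, and the option names follow the Dynare manual), initial values can be supplied with an estimated_params_init block and the optimizer switched with mode_compute:

```
// Illustrative sketch: the starting values below are guesses, not recommendations.
estimated_params_init;
  kapa, 0.5;
  tau,  2.0;
  dsge_prior_weight, 1.0;
end;

// mode_compute selects the optimization routine
// (e.g. 4 = csminwel, 6 = Monte Carlo based optimization).
estimation(datafile=zhw1, dsge_var, mode_compute=4, mh_replic=0);
```

Setting mh_replic=0 here lets you check that mode-finding succeeds before launching the MCMC.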

The number of lags is not determined endogenously. You have to choose this parameter. You may estimate DSGE-VAR models with 1, 2, 3, …, P lags and choose the number of lags that maximizes the marginal density of the sample.
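Concretely (a hedged sketch; dsge_varlag is, per the Dynare manual, the option controlling the number of VAR lags, and the datafile name is taken from the attachment), the comparison can be run as a sequence of estimations:

```
// Re-estimate with P = 1, 2, 3, 4 lags and keep the specification
// whose reported "Log data density" (marginal density) is largest.
estimation(datafile=zhw1, dsge_var, dsge_varlag=1, mode_compute=4);
estimation(datafile=zhw1, dsge_var, dsge_varlag=2, mode_compute=4);
estimation(datafile=zhw1, dsge_var, dsge_varlag=3, mode_compute=4);
estimation(datafile=zhw1, dsge_var, dsge_varlag=4, mode_compute=4);
```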

Best,
Stéphane.

Thanks a lot, Professor. I have solved the problem from my earlier post. I applied the HP filter to real GDP and the nominal interest rate to extract the cyclical components as my GDP gap and rate gap, but now I have the following problem.

Can you give me some advice on how to change the initial values and which optimization routine to choose?

The attachments are the new code and data.

POSTERIOR KERNEL OPTIMIZATION PROBLEM!
(minus) the hessian matrix at the “mode” is not positive definite!
=> posterior variance of the estimated parameters are not positive.
You should try to change the initial values of the parameters using
the estimated_params_init block, or use another optimization routine.

RESULTS FROM POSTERIOR MAXIMIZATION
parameters
prior mean mode s.d. t-stat prior pstdev

kapa 0.300 0.6449 0.0000 0.0000 gamm 0.1500
tau 2.000 2.0780 0.4900 4.2404 gamm 0.5000
phi1 1.500 1.0000 0.0006 1770.9650 gamm 0.2500
phi2 0.125 0.8500 0.0000 0.0000 gamm 0.1000
phoR 0.500 0.2354 0.0001 1841.7337 beta 0.2000
phog 0.800 0.8844 0.0181 48.8823 beta 0.1000
phoz 0.300 0.3388 0.0891 3.8021 beta 0.1000
dsge_prior_weight 1.000 0.4716 0.0413 11.4201 unif 0.5774
standard deviation of shocks
prior mean mode s.d. t-stat prior pstdev

eg 0.875 0.2759 0.0333 8.2776 invg 0.4300
ez 0.630 0.6462 0.1963 3.2910 invg 0.3230
er 0.251 0.2808 0.0608 4.6199 invg 0.1390

Log data density [Laplace approximation] is -165.710075.

??? Error using ==> chol
Matrix must be positive definite.

Error in ==> metropolis_hastings_initialization at 52
d = chol(vv);

Error in ==> random_walk_metropolis_hastings at 43
ix2, ilogpo2, ModelName, MhDirectoryName, fblck, fline, npar, nblck, nruns, NewFile, MAX_nruns, d ] = ...

Error in ==> dynare_estimation_1 at 1048
feval(options_.posterior_sampling_method,'DsgeVarLikelihood',options_.proposal_distribution,xparam1,invhess,bounds,gend);

Error in ==> dynare_estimation at 62
dynare_estimation_1(var_list,varargin{:});

Error in ==> frank at 140
dynare_estimation(var_list_);

Error in ==> dynare at 132
evalin('base',fname);
zhw.xls (20.5 KB)
frank.mod (2.12 KB)

Hi,
try using mode_compute=6

in the estimation command, which triggers a Monte Carlo based optimization (see dynare.org/DynareWiki/MonteCarloOptimization). It often helps with non-positive-definite posterior covariance matrices.
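A minimal example of such a command (a sketch; mode_compute=6 is Dynare's Monte Carlo based mode-finder, and the datafile name matches the attachment):

```
// mode_compute=6 runs a Metropolis-based search for the posterior mode
// and builds the proposal covariance from the resulting draws, so it does
// not require the hessian at the mode to be positive definite.
estimation(datafile=zhw, dsge_var, mode_compute=6, mh_replic=10000);
```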


Many thanks! I tried it and it works!