Errors in Bayesian Estimation

When I run Bayesian estimation with

estimation(datafile=us_data,xls_sheet=delta,mh_drop=0.2,mh_replic=5000,mh_jscale=0.4);

the following errors occur:

??? Error using ==> chol
Matrix must be positive definite.
Error in ==> metropolis_hastings_initialization (line 68)
d = chol(vv);
Error in ==> random_walk_metropolis_hastings (line 63)
metropolis_hastings_initialization(TargetFun, xparam1, vv,
    mh_bounds,dataset_,options_,M_,estim_params_,bayestopt_,oo_);
Error in ==> dynare_estimation_1  (line 782)
  feval(options_.posterior_sampling_method,objective_function,options_.proposal_distribution,xparam1,invhess,bounds,dataset_,options_,M_,estim_params_,bayestopt_,oo_);
Error in ==> dynare_estimation  (line 89)
dynare_estimation_1(var_list,dname);

Error in ==>  main201707 (line 1370)
dynare_estimation(var_list_);

Error in ==> dynare  (line 180)
evalin('base',fname) ;

Then I added mode_compute=6 and ran it again. I waited a long time (over 40 minutes), but the estimation did not terminate even though I set only mh_replic=500 or mh_replic=2000 (note: I realized that the Dynare reference manual suggests mh_replic should be larger than 1200). So I gave up.

Could anyone please tell me how to solve this problem, or point me to existing posts I can refer to? Thank you.

Hi,

mode_compute=6 is a very slow optimization routine (based on a Metropolis algorithm; it is documented on the wiki), so I am not surprised by the timing you report. But I do not understand your problem: did you kill the optimization (mode_compute=6) before it exited normally? In that case the value of mh_replic is not relevant (i.e. it does not affect the time spent in the optimization). You should read the section devoted to the optimization algorithms if you want to tune the parameters (see the ncov-mh and scale-mh options there).

Best,
Stéphane.
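For instance, those tuning parameters are passed through the optim option of the estimation command. A sketch, assuming the option names 'ncov-mh' and 'nscale-mh' from the Dynare manual (please verify them against your Dynare version):

```matlab
// Sketch: mode-finding only (mh_replic=0), with a tuned mode_compute=6.
// The option names below are taken from the manual and may differ
// across Dynare versions.
estimation(datafile=us_data, xls_sheet=delta,
           mode_compute=6, mh_replic=0,
           optim=('ncov-mh', 20000, 'nscale-mh', 20000));
```

Lowering these draw counts shortens the mode-finding step, at the cost of a rougher estimate of the posterior covariance used as the proposal.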

Did you check whether the mode_check plots look ok? That is always the first diagnostic step.

Thank you so much for your always kind reply. I ran mode_check, but there is no green line (log-likelihood kernel) or blue line (log-posterior) in the graphs of my attached file, and the values are surprisingly large. I read “An Introduction to Graphs in Dynare”, and it sounds as if something is wrong with my prior distribution assumptions. How should I deal with it? My priors are as follows:

......
//E3 neutral technology shock
muzR=(1-rhomuz)*STEADY_STATE(muzR)+rhomuz*muzR(-1)+muz_eps_s/100+muz_eps_n4(-4)/100+muz_eps_n8(-8)/100;

//E4 preference shock
zetaR=(1-rhozeta)*STEADY_STATE(zetaR)+rhozeta*zetaR(-1)+zeta_eps_s/100+zeta_eps_n4(-4)/100+zeta_eps_n8(-8)/100;

.................

//E8 unemployment benefits shock
DR=(1-rhoD)*STEADY_STATE(DR)+rhoD*DR(-1)+D_eps_s/100+D_eps_n4(-4)/100+D_eps_n8(-8)/100;

//////////////////////////////
//measurement equations///////
//////////////////////////////

[name='Observation equation output']
dy=yR-yR(-1)+muR+y_eps_me/100;
[name='Observation equation consumption']
dc=cR-cR(-1)+muR;
[name='Observation equation investment']
di=iR-iR(-1)+muR+upsilonR-upsilonR(-1);

.......

[name='Observation equation capital utilization']
duk=ukR-ukR(-1);
end;

estimated_params;
alfa,0.2366,,,BETA_PDF,0.33,0.2; % capital share
sigma,0.5570,,,BETA_PDF,0.5,0.2; %matching function share of unemp
lambda,1.4256,0.9,,GAMMA_PDF,1.19,1; % steady state price markup
b,0.8320,,,BETA_PDF,0.5,0.1;   % habit formation in consumption
xi,0.5841,,,BETA_PDF,0.68,0.2; % Calvo prices
Spp,13.6684,1,,GAMMA_PDF,7.5,30; % second derivative of invest. adjustment costs
sigmaa,0.0760,0.001,,GAMMA_PDF,0.32,0.3; %capacity utilization costs
rhoR,0.8555,,,BETA_PDF,0.5,0.2; % interest rate smoothing
rpi,1.3552,1.0001,,GAMMA_PDF,1.69,.3; % coefficient on infla
ry,0.0359,,,GAMMA_PDF,0.08,0.1; % coefficient on GDP
rhomupsi,0.7279,,,BETA_PDF,0.5,0.2; % invest. tech. smoothing
rhomuz,0.1,,,BETA_PDF,0.5,0.2; % neutral tech. smoothing
rhoA,0.1,,,BETA_PDF,0.5,0.2; % station. tech. smoo.
rhoD,0.5,,,BETA_PDF,0.5,0.2; % unemp. benef. smoo.
rhog,0.1,,,BETA_PDF,0.5,0.2; % gov. expen. smoo.
rhoups,0.1,,,BETA_PDF,0.5,0.2; % station. invest. tech. smoo.
rhozeta,0.1,,,BETA_PDF,0.5,0.2; % prefer. shock smoo.
DSHARE,0.6682,,,BETA_PDF,0.5,0.2; % unemp. benef. as share of w (replacement ratio)
theta,1,0.01,,GAMMA_PDF,0.94,0.5; % inverse Frisch labor elasticity
etah,0.5570,,,BETA_PDF,0.5,0.2; % workers' hours bargaining power
stderr R_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr R_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr R_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr muz_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr muz_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr muz_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr mupsi_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr mupsi_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr mupsi_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr A_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr A_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr A_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr D_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr D_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr D_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr g_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr g_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr g_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr ups_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr ups_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr ups_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr zeta_eps_s,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr zeta_eps_n4,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr zeta_eps_n8,0.1,0.01,3,INV_GAMMA_PDF,0.1,2;
stderr y_eps_me,0.001,0,1,uniform_pdf,0.01,0.01;
stderr w_eps_me,0.001,0,1,uniform_pdf,0.01,0.01;
end;

varobs dy dc di dh dg dip dunemp df dtfp dw dpi dR duk;

estimation(datafile=us_data,xls_sheet=delta,mh_drop=0.2,mh_replic=5000,mh_jscale=0.2,
mode_check);

By the way, those observables’ values are not percentages but raw levels.
untitled.pdf (9.96 KB)

I see. Thank you, Professor. I used Ctrl+C to stop the run before the optimization was over, since I thought it was taking such a long time that maybe something was wrong.

All best,

Samson

As I said, there is most probably still a problem with the mapping from the data to your model variables. You cannot easily use unlogged data. Your mode_check plots show that your mode-finding did not work at all. My hunch is that the large values indicate the error code returned by the penalty function. Please make sure your observation equations and the data treatment are correct.
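To make the data treatment concrete: with first-difference observation equations such as dy = yR - yR(-1) + muR + ..., the observable should normally be the log first difference of the level series. A minimal MATLAB sketch (the series name gdp_level and its values are hypothetical):

```matlab
% Hypothetical quarterly series in levels, e.g. real GDP per capita.
gdp_level = [100.0; 101.2; 101.9; 103.0];

% Log first difference, matching dy = yR - yR(-1) in the model.
% Multiply by 100 only if the model variables are measured in percent;
% here the shocks enter divided by 100, so the scaling must be checked
% against the model's units.
dy = diff(log(gdp_level));
```

The same transformation applies to the other growth-rate observables (dc, di, ...).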

Thank you, Professor. I checked my data and transformed it into growth rates. However, when I run the estimation, the same problem as above occurs, and the mode_check results are almost as bad as before. I am now running mode_compute=6 and will have to wait some time for the results. Is there any other potential mistake that could lead to my problem?

I finished mode_compute=6, but there is still no improvement and the same problem occurred.

I agree with Johannes: you should really check your data and its consistency with the model’s observed variables. But first, could you add

prior optimize;

after the estimation command (with mode_compute=0, mh_replic=0)? This triggers an optimization of the prior density (i.e. the data are not used), discarding parameter vectors for which the model misbehaves (no steady state, issues with the Blanchard-Kahn conditions, …). If this optimization fails (i.e. if the estimated prior mode is far from the prior mode implied by the priors you have chosen), it means there is a problem with the model and/or the priors. Otherwise, the only remaining explanation is to be found in the data (or in the consistency of the data with the model’s observed variables).

Best,
Stéphane.
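Concretely, that check can be sketched as follows (reusing the file names from this thread; the exact placement and syntax of the prior statement should be verified against the manual):

```matlab
// Sketch: optimize the prior density only; the data are not used.
estimation(datafile=us_data, xls_sheet=delta,
           mode_compute=0, mh_replic=0);
prior optimize;
```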

Please also try the following: calibrate your model to the best of your knowledge and simulate data from it using stoch_simul. Then compare the mean and standard deviation of the simulated observables to those in your data set. They should be roughly similar.
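A sketch of that check (calibration assumed done in the .mod file; the commented MATLAB lines are indicative, since the layout of M_.endo_names varies across Dynare versions):

```matlab
// Sketch: simulate from the calibrated model and inspect moments.
stoch_simul(order=1, periods=10000) dy dc di;
// stoch_simul prints moments of the listed variables; the simulated
// series are also stored in oo_.endo_simul, indexed via M_.endo_names,
// so means and standard deviations can be recomputed by hand and
// compared with the corresponding statistics of the actual data.
```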

I see. Thank you, Professor. I will check my model and observed variables carefully again.

All best,

Samson

Thank you, Professor. I will try this method.

All best,

Samson