Dear Dr.,
I have four questions here related to estimation and identification:
- When I run Bayesian estimation, only `mode_compute=6` works. If I apply `mode_compute=4`, for example, I receive the following output:
```
Initial value of the log posterior (or likelihood): -70885352.0246
f at the beginning of new iteration, 70885352.0246478620
Predicted improvement: 314520590462490300.000000000
lambda = 1; f = 813385563.7640533
lambda = 0.33333; f = 153375384.0570612
lambda = 0.11111; f = 80047636.5492921
lambda = 0.037037; f = 71902347.8782074
lambda = 0.012346; f = 70998020.2210133
lambda = 0.0041152; f = 70897762.0902082
lambda = 0.0013717; f = 70886695.8696246
lambda = 0.00045725; f = 70885490.5093615
lambda = 0.00015242; f = 70885364.4594246
lambda = 5.0805e-05; f = 70885352.7268774
lambda = 1.6935e-05; f = 70887257.7117893
lambda = 5.645e-06; f = 70885352.0265793
lambda = 1.8817e-06; f = 70885352.0248372
lambda = 6.2723e-07; f = 70885352.0246680
lambda = 2.0908e-07; f = 70885352.0246501
lambda = 6.9692e-08; f = 70885352.0246481
lambda = 2.3231e-08; f = 70885352.0246479
lambda = 7.7435e-09; f = 70885352.0246479
lambda = 2.5812e-09; f = 70885352.0246479
lambda = -6.2723e-07; f = 70885376.7557529
lambda = -2.0908e-07; f = 70885354.7690719
lambda = -6.9692e-08; f = 70885352.3284281
lambda = -2.3231e-08; f = 70885352.0580187
lambda = -7.7435e-09; f = 70885352.0282309
lambda = -2.5812e-09; f = 70885352.0250064
Norm of dx 7.9312e+06
----
Improvement on iteration 1 = 0.000000000
improvement < crit termination
smallest step still improving too slow, reversed gradient
Final value of minus the log posterior (or likelihood): 70885352.024648

POSTERIOR KERNEL OPTIMIZATION PROBLEM!
(minus) the hessian matrix at the "mode" is not positive definite!
=> posterior variance of the estimated parameters are not positive.
You should try to change the initial values of the parameters using
the estimated_params_init block, or use another optimization routine.
Warning: The results below are most likely wrong!
> In dynare_estimation_1 at 316
  In dynare_estimation at 105
  In NK_bm_estimation at 630
  In dynare at 223

MODE CHECK

Fval obtained by the minimization routine (minus the posterior/likelihood): 70885352.024648
Most negative variance -9192556.781705 for parameter 20 (psi_taun = 0.150000)
Warning: Matrix is singular, close to singular or badly scaled. Results may be inaccurate.
RCOND = NaN.
> In dynare_estimation_1 at 339
  In dynare_estimation at 105
  In NK_bm_estimation at 630
  In dynare at 223
```
```
RESULTS FROM POSTERIOR ESTIMATION

parameters
              prior mean      mode      s.d.   prior   pstdev

sig_a              0.001    0.0010       NaN   invg    0.0100
sig_betta          0.001    0.0010       NaN   invg    0.0100
sig_invest         0.001    0.0010       NaN   invg    0.0100
sig_r              0.001    0.0010       NaN   invg    0.0100
sig_g              0.001    0.0010       NaN   invg    0.0100
sig_taun           0.001    0.0010       NaN   invg    0.0100
sig_tauk           0.001    0.0010       NaN   invg    0.0100
var_a              0.500    0.5000       NaN   beta    0.2000
var_betta          0.500    0.5000       NaN   beta    0.2000
var_invest         0.500    0.5000       NaN   beta    0.2000
var_r              0.500    0.5000       NaN   beta    0.2000
var_g              0.500    0.5000       NaN   beta    0.2000
var_taun           0.500    0.5000       NaN   beta    0.2000
var_tauk           0.500    0.5000       NaN   beta    0.2000
rhho_g             0.500    0.5000       NaN   beta    0.2000
rhho_taun          0.500    0.5000       NaN   beta    0.2000
rhho_tauk          0.500    0.5000       NaN   beta    0.2000
rhho_z             0.500    0.5000       NaN   beta    0.2000
psi_g              0.150    0.1500       NaN   norm    0.1000
psi_taun           0.150    0.1500       NaN   norm    0.1000
psi_tauk           0.150    0.1500       NaN   norm    0.1000
psi_z              0.150    0.1500       NaN   norm    0.1000
cbetta             0.715    0.7151       NaN   gamm    0.1000
h                  0.700    0.7000       NaN   beta    0.1000
etta               2.000    2.0000       NaN   norm    0.5000
zetta2             0.500    0.5000       NaN   beta    0.1500
kapa               4.000    4.0000       NaN   norm    1.5000
iota_p             0.500    0.5000       NaN   beta    0.1000
omegga             0.500    0.5000       NaN   beta    0.1000
ctrend             0.370    0.3700       NaN   norm    0.1000
cpie               0.360    0.3600       NaN   gamm    0.1000
phi_pie            1.500    1.5000       NaN   norm    0.1250
phi_y              0.120    0.1200       NaN   norm    0.0500
rhho_r             0.750    0.7500       NaN   beta    0.1000

Log data density [Laplace approximation] is NaN.
```
```
Error using chol
Matrix must be positive definite with real diagonal.

Error in posterior_sampler_initialization (line 84)
    d = chol(vv);

Error in posterior_sampler (line 59)
    [ ix2, ilogpo2, ModelName, MetropolisFolder, fblck, fline, npar, nblck, nruns, NewFile, MAX_nruns, d, bayestopt_] = ...

Error in dynare_estimation_1 (line 447)
    posterior_sampler(objective_function,posterior_sampler_options.proposal_distribution,xparam1,posterior_sampler_options,bounds,dataset_,dataset_info,options_,M_,estim_params_,bayestopt_,oo

Error in dynare_estimation (line 105)
    dynare_estimation_1(var_list,dname);

Error in NK_bm_estimation (line 630)
    oo_recursive_=dynare_estimation(var_list_);

Error in dynare (line 223)
    evalin('base',fname) ;
```
I also receive errors when I apply the other `mode_compute` options. Does this indicate a problem in my model, or should I simply stick with the one optimizer that works?
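For reference, the error message above suggests changing the starting values via an `estimated_params_init` block. My understanding is that this would look roughly like the following in my mod file (the values are placeholders, not ones I have actually tried):

```
// Hypothetical starting values -- placeholders only, not values I have tested
estimated_params_init;
  psi_taun, 0.10;  // the parameter flagged with a negative variance above
  psi_tauk, 0.10;
end;
```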
- When I apply `mode_compute=4, mode_check`, the mode_check plots look very strange, but with `mode_compute=6, mode_check` they improve and resemble those in your paper *An Introduction to Graphs in Dynare*, although some parameters still look problematic. May I ask why the plots differ so much across `mode_compute` options? In your paper you write: "Ideally, the estimated mode should be at the maximum of the posterior likelihood." If some parameters, like `var_tauk` in Figure 2, do not satisfy this, how should I fix the problem? Besides, as you can see, I have two parameters whose plots consist entirely of big red dots. I checked the forum, and you once told someone else that one solution is to rewrite the model. Since these two parameters are related to taxes, but I do not have tax revenue among my observables, may I ask whether changing the observables by introducing taxes would help?
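To be concrete about the observables change I have in mind, it would be roughly the following, where the tax series names are purely illustrative (I would still have to construct the corresponding data and measurement equations):

```
// Illustrative only: adding tax observables to the varobs list.
// tauN_obs and tauK_obs are hypothetical series I would construct.
varobs y_obs c_obs pie_obs r_obs tauN_obs tauK_obs;
```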
- In the other attached PDF you can see that the convergence diagnostics for my model are quite bad. From your paper *An Introduction to Graphs in Dynare* I know that "If the chains have converged, the two lines should stabilize horizontally and should be close to each other." Mine do not. Is this because the number of iterations is too small (I am using `mh_replic=100000`), or is there some other issue? How can I fix this?
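For context, my estimation call is roughly of the following form (the datafile name and the options other than `mh_replic` are illustrative, not my exact settings):

```
// Sketch of my estimation call; datafile name and most option values are illustrative
estimation(datafile=mydata, mode_compute=6, mh_replic=100000, mh_nblocks=2, mode_check);
```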
- The graphs in question 3 are results from `mode_compute=6, mh_replic=100000`, for which the acceptance ratio is around 30%. When I increase `mh_replic` to 500,000, however, the acceptance ratio declines dramatically to only about 8%. I am always wondering how to judge whether Bayesian estimation results are successful; looking only at the acceptance ratio seems far from enough. So may I ask, in general, how I can tell whether my estimation results make sense?
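My understanding is that the acceptance ratio is normally tuned with `mh_jscale` rather than `mh_replic`: a smaller proposal scale raises the acceptance ratio and a larger one lowers it. Something like the following, where the value 0.2 is only an example:

```
// Illustrative: a smaller mh_jscale raises the acceptance ratio,
// a larger one lowers it; 0.2 is just an example value
estimation(datafile=mydata, mode_compute=6, mh_replic=100000, mh_jscale=0.2);
```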
mode_check_figs.pdf (54.5 KB)
convergence_figs.pdf (180.0 KB)
I have also sent you my mod file by private message. My questions are long; I highly appreciate your help!
Best,