Usual Bayesian estimation problem

Good afternoon,

While mindful of the many threads and posts on this topic, I would nevertheless courteously ask for some assistance, particularly in light of the absence of a definitive solution to the problem at hand.

Namely, how can I overcome the non-positive-definite Hessian matrix problem in this specific case (files attached)?

The model has been calibrated for the US à la Smets and Wouters (2005), relying on their posteriors; it is basically identical to theirs apart from a few additional variables and equations (which are, in any case, not the source of the trouble). I am rather confident that it is correctly specified and cannot see how the priors would be inappropriate for the given context.

Moreover, I have implemented the following options, to no avail (a sketch of the corresponding estimation call follows the list):
mode_compute=6 and 9;
prior_trunc=0;
used more and different data;
tried it for the Euro area version of their paper.
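
For concreteness, the estimation call underlying these attempts looks roughly as follows (a sketch only; the data file and observable names are placeholders, not the ones from the attached files):

[code]
varobs y_obs pi_obs r_obs;    // placeholder observables

estimation(datafile=us_data, mode_compute=6, prior_trunc=0,
           mh_replic=20000, mh_nblocks=3);
[/code]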

Again, assistance would be much appreciated: please take a look at it without redirecting me elsewhere, for everything has already been read.
Many thanks.

AS

Could Mr. jpfeifer, or any of the moderators, kindly take a look at this and offer an opinion?
Many thanks again.

Two initial comments:
[ul]
[*] I am doing this in my free time. That’s why I don’t care if it’s urgent. I also have my own urgent things to do. Urgency on your part usually means that you waited too long.
[*] There is a reason why this is the usual estimation problem! Getting large DSGE models to run is hard. Finding and fixing the problems is something some people do for a living by writing papers about it. It takes a lot of time and often requires looking deep into the model. That’s why hardly more than generic advice can be provided.
[/ul]

Having said that, there are several problems in your case.

[ul]
[*] First, your starting values are poor and your model is large. Thus, finding the mode is really hard and most mode-finders will fail. You might need to find the mode iteratively by loading the previous mode-file using the mode_file option and then continuing with a different optimizer to finally find the mode.

[*] Second, you are estimating many parameters with just three observables! This makes identification really hard. If you try to run the identification command, it complains about not having sufficient moments.

[*] Third, if you increase the number of autocorrelations used for the identification analysis via the ar option of the identification command (see the sketch after this list for what such a call looks like), you get:

[quote]==== Identification analysis ====

Testing prior mean
Evaluating simulated moment uncertainty … please wait
Doing 288 replicas of length 300 periods.
Simulated moment uncertainty … done!

WARNING !!!
The rank of H (model) is deficient!

e_chi is not identified in the model!
[dJ/d(e_chi)=0 for all tau elements in the model solution!]
rho_chi is not identified in the model!
[dJ/d(rho_chi)=0 for all tau elements in the model solution!]

WARNING !!!
The rank of J (moments) is deficient!

e_chi is not identified by J moments!
[dJ/d(e_chi)=0 for all J moments!]
rho_chi is not identified by J moments!
[dJ/d(rho_chi)=0 for all J moments!]

[e_kappa,e_nu] are PAIRWISE collinear (with tol = 1.e-10) !
[e_psi,e_pit] are PAIRWISE collinear (with tol = 1.e-10) !
[rho_psi,rho_pit] are PAIRWISE collinear (with tol = 1.e-10) !

==== Identification analysis completed ====[/quote]

Thus, there are massive identification problems in your model that you need to solve. This can also be seen from the pathological mode_check plots you get after a run of mode_compute=9.
[/ul]
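
For concreteness, such an identification run can be invoked roughly as follows (a sketch; the value of ar below is purely illustrative, not the one actually used):

[code]
// identification analysis using more lags of the autocorrelations of the observables;
// ar=10 is an illustrative value
identification(ar=10);
[/code]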

Rather dejected by the initial reaction, given your customary briskness and efficiency: “urgent” was meant as dire and of subjective importance, by no means a demand. That said, the effort and advice are very much appreciated: thanks. I will endeavour to add observables, even though I have already attempted using seven of them (the simulated data from the calibrated exercise) to no avail. The calibration replicates well the impulse responses I obtain from an SVAR, by the way. Why do you presume the initial values to be poor? What would you suggest?
I have also attempted using uniform distributions, but the story is unchanged.
Thanks once more. Cheers.

“Poor starting values” is probably phrased poorly. We don’t know the true parameters. Using other people’s priors usually is a good idea. However, instead of trying your best guess from the calibrated model, you start at the prior means, which are often not your best point estimate. I am led to say the starting values are poor by the fact that the initial likelihood is in the range of 10^7 and goes down to at least 10^3. That’s why it seems to be far away from the mode.

Please take the identification output seriously. If a parameter is not identified in the model, you can add as many observables as you want: it is still not identified. Knowing that the derivative w.r.t. a parameter is always 0 automatically implies that the Hessian will be singular, because the corresponding row and column of the Hessian are zero.


Much obliged again for the informative reply.

I shall try to use a best guess from the calibrated version to characterise the prior information, as you suggest, rather than adopting the paper’s priors. I fear little success, though, in light of my attempt to estimate the first parameter alone with the calibrated version’s value as the prior mean, only to obtain the following:

[code]
MH: Multiple chains mode.
MH: Searching for initial values…
MH: Initial values found!

MH: Number of mh files : 1 per block.
MH: Total number of generated files : 3.
MH: Total number of iterations : 1000.
MH: average acceptation rate per chain :
0.0080 0.0070 0.0090

MH: Total number of Mh draws: 1000.
MH: Total number of generated Mh files: 1.
MH: I’ll use mh-files 1 to 1.
MH: In mh-file number 1 i’ll start at line 500.
MH: Finally I keep 500 draws.

MH: I’m computing the posterior mean and covariance… Done!

MH: I’m computing the posterior log marginale density (modified harmonic mean)…
MH: The support of the weighting density function is not large enough…
MH: I increase the variance of this distribution.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: There’s probably a problem with the modified harmonic mean estimator.

ESTIMATION RESULTS

Log data density is -Inf.

parameters
prior mean post. mean conf. interval prior pstdev

eta 2.000 2.0000 2.0000 2.0000 norm 0.7500[/code]

What does the problem with the modified harmonic mean estimator indicate?

Also, when relying on the simulated data as observations (7 observables) for the estimation procedure, the same exercise (estimating a single parameter) seems to produce a hybrid result.
Namely, the values are unchanged and the posterior distribution is not displayed at all on the graph (attached), despite the apparent success:

[code]
MH: Multiple chains mode.
MH: Searching for initial values…
MH: Initial values found!

MH: Number of mh files : 1 per block.
MH: Total number of generated files : 3.
MH: Total number of iterations : 10000.
MH: average acceptation rate per chain :
0.6756 0.6791 0.6795

MCMC Diagnostics: Univariate convergence diagnostic, Brooks and Gelman (1998):
Parameter 1… Done!

MH: Total number of Mh draws: 10000.
MH: Total number of generated Mh files: 1.
MH: I’ll use mh-files 1 to 1.
MH: In mh-file number 1 i’ll start at line 5000.
MH: Finally I keep 5000 draws.

MH: I’m computing the posterior mean and covariance… Done!

MH: I’m computing the posterior log marginale density (modified harmonic mean)…
MH: The support of the weighting density function is not large enough…
MH: I increase the variance of this distribution.
MH: Let me try again.
MH: Let me try again.
MH: Modified harmonic mean estimator, done!

ESTIMATION RESULTS

Log data density is -38.385303.

parameters
prior mean post. mean conf. interval prior pstdev

eta 2.000 2.0000 2.0000 2.0000 norm 0.7500[/code]

How should one interpret this?

Finally, you have endorsed iterative mode finding in another recent post as well as here: would loading the MODELNAME_mode file from an unsuccessful estimation run such as the ones discussed here (non-positive-definite Hessian, etc.) be feasible? If so, is there a particular sequence of mode-finders to use (4-9-6)?

You seem to have misinterpreted me. I said: keep the prior distribution in place, but change the starting value. Consider a TFP shock. The Smets/Wouters prior would be

[code]
estimated_params;
rho_z, beta_pdf, 0.5, 0.2;
end;
[/code]
Dynare’s mode-finder would then start at the prior mean of 0.5. Instead, calibrations typically set it to 0.9 or something like that. My recommendation was to use

[code]
estimated_params;
rho_z, 0.9, beta_pdf, 0.5, 0.2;
end;
[/code]
which makes the mode-finder start at 0.9 while keeping the same prior.

For the second case with 7 observables: I would need to look at the mod-file.

Loading the mode-file is always possible, because you are only loading the previous starting values and then continuing. The order does not matter. You try to find a better likelihood. If one mode-finder gets stuck, try another one. Mode-finder number 8 is often very helpful.
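
For concreteness, a sketch of such an iterative sequence (the data file and mode-file names are placeholders for whatever your previous run produced):

[code]
// first pass: run only the mode-finder, no MCMC yet
estimation(datafile=us_data, mode_compute=4, mh_replic=0);

// next pass: reload the mode found above and continue with a different optimizer,
// e.g. number 8; repeat (possibly with 6 or 9) until the posterior mode stops improving
estimation(datafile=us_data, mode_file=MODELNAME_mode, mode_compute=8, mh_replic=0);
[/code]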

Right, I see, thanks much.

Although, as a matter of fact, I had even provided a custom starting value in addition to the standard prior information, as you suggested, but things ultimately still appeared hopeless. Moreover, with 4 real-world observables (as well as with 7 simulated counterparts), the estimation of a single parameter produces the following bewildering output:

[code]
Loading 67 observations from simdata.m

Initial value of the log posterior (or likelihood): -2501834623053198
bad gradient ------------------------


f at the beginning of new iteration, 2501834623053198.0000000000
Predicted improvement: 0.000000000
lambda = 1; f = 2501834623053198.0000000
Norm of dx 0
bad gradient ------------------------
Cliff. Perturbing search direction.
Predicted improvement: 0.000000000
lambda = 1; f = 2501834623053198.0000000
Norm of dx 0
bad gradient ------------------------

Improvement on iteration 1 = 0.000000000
improvement < crit termination
Objective function at mode: 2501834623053198.000000

MODE CHECK

Fval obtained by the minimization routine: 2501834623053198.000000

RESULTS FROM POSTERIOR MAXIMIZATION
parameters
prior mean mode s.d. t-stat prior pstdev

eta 2.000 2.4500 0.0000 95459922.3280 norm 0.7500

Log data density [Laplace approximation] is -2501834623053214.500000.

Warning: File ‘EAb/metropolis’ not found.

In CheckPath at 41
In metropolis_hastings_initialization at 62
In random_walk_metropolis_hastings at 69
In dynare_estimation_1 at 931
In dynare_estimation at 70
In EAb at 353
In dynare at 120
MH: Multiple chains mode.
MH: Searching for initial values…
MH: Initial values found!

MH: Number of mh files : 1 per block.
MH: Total number of generated files : 3.
MH: Total number of iterations : 2000.
MH: average acceptation rate per chain :
0.0040 0.0050 0.0035

MCMC Diagnostics: Univariate convergence diagnostic, Brooks and Gelman (1998):
Parameter 1… Done!

MH: Total number of Mh draws: 2000.
MH: Total number of generated Mh files: 1.
MH: I’ll use mh-files 1 to 1.
MH: In mh-file number 1 i’ll start at line 1000.
MH: Finally I keep 1000 draws.

MH: I’m computing the posterior mean and covariance… Done!

MH: I’m computing the posterior log marginale density (modified harmonic mean)…
MH: The support of the weighting density function is not large enough…
MH: I increase the variance of this distribution.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: Let me try again.
MH: There’s probably a problem with the modified harmonic mean estimator.

ESTIMATION RESULTS

Log data density is -Inf.

parameters
prior mean post. mean conf. interval prior pstdev

eta 2.000 2.4500 2.4500 2.4500 norm 0.7500[/code]

What is this modified harmonic mean estimator issue trying to convey?

Your model is stochastically singular. You do not have enough non-zero-variance shocks for your 4 observables.

Do you mean that the measurement equation requires either of the following modifications:

  • the inclusion of eleven observables, given the same number of structural shocks; or
  • although less acceptable, the elimination of seven structural shocks, given the four available observables?

In the meantime, I have indeed used eleven simulated observables (generated from the calibrated exercise), only to run into the same old non-positive-definite Hessian matrix problem yet again (code attached). Is there truly no way within Dynare to render it positive definite a posteriori? After all, in the Bayesian philosophy, subjective model idiosyncrasies (already largely lost through log-linearisation) are fundamental to the distinctiveness and realism of macroeconomic modelling; bending them to the exigencies of parameter estimation seems rather paradoxical. Between the two, the specification should hardly be the thing to compromise.

In any case, while I shall try to adjust my prior information (starting values, means and variances) alongside an iterative mode-finding procedure as suggested, I would also like to understand the meaning of “misspecified model” in the similar context of the following thread: “‘mode_check’ and ‘matrix must be positive definite’ problem”. In more detail, I do not believe my model suffers from such an issue (partly because you did not say so in your very first remark), but to hedge against all compromising eventualities I would like to know whether it might be the case. Thanks again.

Any thoughts, Dr. Pfeifer, on the above queries?
Regards.

You always need at least as many shocks as observables. Otherwise estimation is impossible, because there will be a linear combination of observables implying one particular realization of the shocks. But with a continuous distribution, any one particular shock value has likelihood 0 (i.e. the log-likelihood will always be minus infinity). This is the problem of stochastic singularity in a nutshell. If you do not have as many structural shocks as observables, people typically use measurement errors to fill up to the number of observables.
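
For concreteness, a sketch of how a measurement error is added in Dynare (the observable names and the prior below are purely illustrative):

[code]
varobs y_obs pi_obs r_obs w_obs;   // say, 4 observables but only 3 structural shocks

estimated_params;
// ... structural parameters and structural shock standard deviations ...
// 'stderr' on an observed variable declares a measurement error on that observable
stderr y_obs, inv_gamma_pdf, 0.1, 2;
end;
[/code]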

Sure, thanks for the explanation. However, while that is theoretically acceptable, I have the opposite problem, namely too many shocks relative to observables: 11 vs 4. As mentioned, I would need to get my hands on more observations if I am not willing to compromise my model by reducing the number of shocks. Again, in the meantime I have used 11 simulated observables, but without any success; I am still presented with a non-positive-definite Hessian matrix and the MCMC does not start. My question is: given the appropriate model specification and the futile attempts at refining the starting values, means and variances of my priors, is there not still (I recall you mentioned it existed) a way for Dynare to let the user render it positive definite a posteriori? Cheers.

What you can do to change the non-positive definite Hessian into a positive definite one is to replace the chol(hh); check in the try/catch statement of dynare_estimation_1.m by

[code]
[cholmat,negeigennvalues]=cholcov(hh,0);
if negeigennvalues~=0 && ~isnan(negeigennvalues)
    [V,D] = eig(hh);
    D=abs(D);
    temp=diag(D);
    temp(temp<1e-8)=1e-8;
    D=diag(temp);
    hh=V*D*V';
    [hh,negeigenvalues1]=cholcov(hh,0);
end
[/code]

Any eigenvalue smaller than or equal to zero is set to a small positive number.

LET ME REPEAT FOR OTHER PEOPLE READING THIS: THIS IS NOT RECOMMENDED AND DOES IN NO WAY ASSURE CORRECT RESULTS.

A better, but more complicated, way would be to use Gill/King (2004): gking.harvard.edu/files/abs/help-abs.shtml. This assures a Hessian that is, in some sense, very close to the presumed true one (which also presupposes that the Hessian you are trying to approximate is close to the one at the true mode and not far off because the model still has other issues or because the starting point for the MC sampler is far from the mode).

Alternatively, you could just set hh to a small identity matrix and start the MCMC. My guess is that the sampler will slowly move to a more likely posterior region and thus bring you closer to the true mode, where you might want to restart the mode-finder (e.g. by doing an even more prolonged run of mode_compute=6).

Sincere thanks. Although, out of sheer scrupulousness: should the modification replace the following code

[code]
if ~options_.mh_posterior_mode_estimation && options_.cova_compute
    try
        chol(hh);
    catch
        disp(' ')
        disp('POSTERIOR KERNEL OPTIMIZATION PROBLEM!')
        disp(' (minus) the hessian matrix at the "mode" is not positive definite!')
        disp('=> posterior variance of the estimated parameters are not positive.')
        disp('You should try to change the initial values of the parameters using')
        disp('the estimated_params_init block, or use another optimization routine.')
        warning('The results below are most likely wrong!');
    end
end
[/code]

with

[code]
if ~options_.mh_posterior_mode_estimation && options_.cova_compute
    [cholmat,negeigennvalues]=cholcov(hh,0);
    if negeigennvalues~=0 && ~isnan(negeigennvalues)
        [V,D] = eig(inv_hessian);
        D=abs(D);
        temp=diag(D);
        temp(temp<1e-8)=1e-8;
        D=diag(temp);
        inv_hessian=V*D*V';
        [hh,negeigenvalues1]=cholcov(hh,0);
    end
end
[/code]

thereby dispensing with the whole try/catch statement, or should it be done otherwise? I ask because, following the modification as above, the estimation procedure produces the following message:

[code]Undefined function or variable “inv_hessian”.

Error in dynare_estimation_1 (line 473)
[V,D] = eig(inv_hessian);

Error in dynare_estimation (line 70)
dynare_estimation_1(var_list,dname);

Error in EAb (line 389)
dynare_estimation(var_list_);

Error in dynare (line 120)
evalin(‘base’,fname) ;[/code]

What is going wrong?

Moreover, how does one concretely set hh to a small identity matrix? And by “restarting the mode-finder”, do you mean implementing the iterative procedure by loading the mode file?

I corrected the above code. The variable naming was not consistent.

To set it to a small identity, just set hh=1e-4*eye(size(hh))

Dr. Pfeifer: I’m still working on it; besides, the holidays have interfered. I hope that my next post will contain nothing but glee. In any case, sincere thanks for everything you have offered so far. Cheers.

Not so gleeful after all upon my return. Anyway, while using the modified “dynare_estimation_1.m” code version

[code]
if ~options_.mh_posterior_mode_estimation && options_.cova_compute
    [cholmat,negeigennvalues]=cholcov(hh,0);
    if negeigennvalues~=0 && ~isnan(negeigennvalues)
        [V,D] = eig(hh);
        D=abs(D);
        temp=diag(D);
        temp(temp<1e-8)=1e-8;
        D=diag(temp);
        hh=1e-4*eye(size(hh));
        [hh,negeigenvalues1]=cholcov(hh,0);
    end
end
[/code]

instead of the original one

[code]
if ~options_.mh_posterior_mode_estimation && options_.cova_compute
    try
        chol(hh);
    catch
        disp(' ')
        disp('POSTERIOR KERNEL OPTIMIZATION PROBLEM!')
        disp(' (minus) the hessian matrix at the "mode" is not positive definite!')
        disp('=> posterior variance of the estimated parameters are not positive.')
        disp('You should try to change the initial values of the parameters using')
        disp('the estimated_params_init block, or use another optimization routine.')
        warning('The results below are most likely wrong!');
    end
end
[/code]

I have come across the following problem when using mode_compute=4.

[code]Configuring Dynare …
[mex] Generalized QZ.
[mex] Sylvester equation solution.
[mex] Kronecker products.
[mex] Sparse kronecker products.
[mex] Local state space iteration (second order).
[mex] Bytecode evaluation.
[mex] k-order perturbation solver.
[mex] k-order solution simulation.
[mex] Quasi Monte-Carlo sequence (Sobol).
[mex] Markov Switching SBVAR.

Starting Dynare (version 4.3.3).
Starting preprocessing of the model file …
Substitution of endo lags >= 2: added 3 auxiliary variables and equations.
Found 20 equation(s).
Evaluating expressions…done
Computing static model derivatives:
 - order 1
Computing dynamic model derivatives:
 - order 1
 - order 2
Processing outputs …done
Preprocessing completed.
Starting MATLAB/Octave computing.

You did not declare endogenous variables after the estimation/calib_smoother command.
Prior distribution for parameter rho_g has two modes!
Warning: File ‘usvs1/prior’ not found.

In CheckPath at 41
In set_prior at 264
In dynare_estimation_init at 123
In dynare_estimation_1 at 59
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Loading 67 observations from simdata.m

Initial value of the log posterior (or likelihood): -16917479858827.47


f at the beginning of new iteration, 16917479858827.4726562500
Predicted improvement: 725111275279083473207296.000000000
lambda = 1; f = 354799973695706816.0000000
lambda = 0.33333; f = 39437255816629904.0000000
lambda = 0.11111; f = 4396954658578081.5000000
lambda = 0.037037; f = 503588139409747.3125000
lambda = 0.012346; f = 70991951559073.2187500
lambda = 0.0041152; f = 22925739150193.5273438
lambda = 0.0013717; f = 17585059111086.7675781
lambda = 0.00045725; f = 16991653627368.1953125
lambda = 0.00015242; f = 16925720821237.9707031
lambda = 5.0805e-05; f = 16918395332725.2246094
lambda = 1.6935e-05; f = 16917581515835.4941406
lambda = 5.645e-06; f = 16917491133829.4179688
lambda = 1.8817e-06; f = 16917481105415.2265625
lambda = 6.2723e-07; f = 16917479995823.9375000
lambda = 2.0908e-07; f = 16917479873822.2539062
lambda = 6.9692e-08; f = 16917479860426.5898438
lambda = 2.3231e-08; f = 16917479858984.1835938
lambda = 7.7435e-09; f = 16917479858839.1464844
lambda = 2.5812e-09; f = 16917479858827.7714844

lambda =

-6.2723e-07

lambda = -6.2723e-07; f = 16917536766842.5214844
lambda = -2.0908e-07; f = 16917486180643.0429688
lambda = -6.9692e-08; f = 16917480560822.5234375
lambda = -2.3231e-08; f = 16917479936685.8984375
lambda = -7.7435e-09; f = 16917479867432.6699219
lambda = -2.5812e-09; f = 16917479859769.2148438
Norm of dx 1.2043e+10

Improvement on iteration 1 = 0.000000000
improvement < crit termination
smallest step still improving too slow, reversed gradient
Objective function at mode: 16917479858827.472656

MODE CHECK

Fval obtained by the minimization routine: 16917479858827.472656

RESULTS FROM POSTERIOR MAXIMIZATION
parameters
prior mean mode s.d. t-stat prior pstdev

eta 2.450 2.4500 10.0000 0.2450 norm 0.7500
sigma_c 1.620 1.6200 10.0000 0.1620 norm 0.3750
h 0.690 0.6900 10.0000 0.0690 beta 0.1000
omicron 5.860 5.8600 10.0000 0.5860 norm 2.0000
omega 3.230 3.2300 10.0000 0.3230 norm 0.1000
rho_rn 0.880 0.8800 10.0000 0.0880 norm 0.1000
phi_pi 1.480 1.4800 10.0000 0.1480 norm 0.1000
phi_y 0.080 0.0800 10.0000 0.0080 norm 0.0500
tau 0.660 0.6600 10.0000 0.0660 beta 0.1000
xi 0.870 0.8700 10.0000 0.0870 beta 0.1000
rho_kappa 0.490 0.4900 10.0000 0.0490 beta 0.1000
rho_z 0.750 0.7500 10.0000 0.0750 beta 0.1000
rho_s 0.866 0.8660 10.0000 0.0866 beta 0.1000
rho_a 0.822 0.8220 10.0000 0.0822 beta 0.1000
rho_vphi 0.700 0.7000 10.0000 0.0700 beta 0.1000
rho_g 0.980 0.9800 10.0000 0.0980 beta 0.1000
standard deviation of shocks
prior mean mode s.d. t-stat prior pstdev

e_kappa 0.250 0.2500 10.0000 0.0250 invg Inf
e_z 0.250 0.2500 10.0000 0.0250 invg Inf
e_a 0.250 0.2500 10.0000 0.0250 invg Inf
e_s 0.250 0.2500 10.0000 0.0250 invg Inf
e_vphi 0.250 0.2500 10.0000 0.0250 invg Inf
e_g 0.250 0.2500 10.0000 0.0250 invg Inf

Log data density [Laplace approximation] is -16917479858756.599609.

Warning: File ‘usvs1/metropolis’ not found.

In CheckPath at 41
In metropolis_hastings_initialization at 62
In random_walk_metropolis_hastings at 69
In dynare_estimation_1 at 931
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Multiple chains mode.
MH: Searching for initial values…
MH: I couldn’t get a valid initial value in 100 trials.
MH: You should Reduce mh_init_scale…
MH: Parameter mh_init_scale is equal to 0.400000.
MH: Enter a new value… 0.01
MH: I couldn’t get a valid initial value in 100 trials.
MH: You should Reduce mh_init_scale…
MH: Parameter mh_init_scale is equal to 0.010000.
MH: Enter a new value… 0.009
MH: Initial values found!

MH: Number of mh files : 1 per block.
MH: Total number of generated files : 3.
MH: Total number of iterations : 1000.
MH: average acceptation rate per chain :
0 0 0

MH: Total number of Mh draws: 1000.
MH: Total number of generated Mh files: 1.
MH: I’ll use mh-files 1 to 1.
MH: In mh-file number 1 i’ll start at line 500.
MH: Finally I keep 500 draws.

MH: I’m computing the posterior mean and covariance… Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.048776e-20.

In compute_mh_covariance_matrix at 74
In marginal_density at 50
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.048776e-20.
In marginal_density at 56
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Done!

MH: I’m computing the posterior log marginale density (modified harmonic mean)…
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.048776e-20.

In marginal_density at 67
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: The support of the weighting density function is not large enough…
MH: I increase the variance of this distribution.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.766881e-18.
In marginal_density at 102
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.628578e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.020776e-19.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 3.854419e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.434634e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 3.001022e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.185226e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 2.664325e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.092922e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 2.740423e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.211734e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 8.509851e-19.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 5.124496e-19.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 1.230393e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 4.619124e-19.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 2.226457e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 9.653391e-19.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: There’s probably a problem with the modified harmonic mean estimator.

ESTIMATION RESULTS

Log data density is -Inf.

parameters
prior mean post. mean conf. interval prior pstdev

eta 2.450 2.5128 2.4941 2.5234 norm 0.7500
sigma_c 1.620 1.5726 1.5293 1.6490 norm 0.3750
h 0.690 0.7136 0.6796 0.7588 beta 0.1000
omicron 5.860 5.8657 5.8194 5.8977 norm 2.0000
omega 3.230 3.1770 3.1271 3.2162 norm 0.1000
rho_rn 0.880 0.8338 0.8009 0.8599 norm 0.1000
phi_pi 1.480 1.3702 1.2954 1.4627 norm 0.1000
phi_y 0.080 0.0732 0.0564 0.0975 norm 0.0500
tau 0.660 0.7306 0.6805 0.8035 beta 0.1000
xi 0.870 0.8748 0.8302 0.9003 beta 0.1000
rho_kappa 0.490 0.4879 0.4520 0.5515 beta 0.1000
rho_z 0.750 0.8181 0.7703 0.8459 beta 0.1000
rho_s 0.866 0.8520 0.8303 0.8792 beta 0.1000
rho_a 0.822 0.8696 0.8229 0.9085 beta 0.1000
rho_vphi 0.700 0.6995 0.5958 0.7931 beta 0.1000
rho_g 0.980 0.9080 0.7955 0.9659 beta 0.1000

standard deviation of shocks
prior mean post. mean conf. interval prior pstdev

e_kappa 0.250 0.2046 0.1689 0.2707 invg Inf
e_z 0.250 0.2836 0.1910 0.3328 invg Inf
e_a 0.250 0.3017 0.2543 0.3651 invg Inf
e_s 0.250 0.4057 0.3421 0.4611 invg Inf
e_vphi 0.250 0.2365 0.1649 0.3108 invg Inf
e_g 0.250 0.3230 0.2239 0.4111 invg Inf
Warning: BETAINV did not converge for a = 0.9408, b = 0.0192, p = 0.999.

In betainv at 61
In draw_prior_density at 47
In PlotPosteriorDistributions at 80
In dynare_estimation_1 at 951
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Total computing time : 0h00m52s[/code]

It should be recalled that the model’s structural shocks have been reduced to six, equalling the number of observed variables, thus eliminating the previously raised issue of stochastic singularity. Why do you think there is still an issue with the modified harmonic mean estimator? Moreover, mode_compute=6 does not even work. As mentioned in a previous post of mine, further re-specification of the model would substantially undermine the underlying research question; I would just like to estimate these parameters one way or another: how can all of this be overcome? Thank you.

Judging from the extremely poor initial likelihood (the minimization objective starts at about 1.7e13), your model did not solve correctly at all. Try providing initial values at which the model at least solves.
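
One way to check this (a sketch, not specific to your files) is to solve the model once at the intended starting values before estimating:

[code]
// with the parameters set to the intended starting values:
resid;                        // residuals of the static equations at the steady state guess
steady;                       // does a steady state exist at these values?
check;                        // are the Blanchard-Kahn conditions satisfied?
stoch_simul(order=1, irf=0);  // can the first-order solution actually be computed?
[/code]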

The initial values suit the model well, particularly around the calibration: the impulse response functions look very realistic. Why is it generating a matrix that is close to singular (or badly scaled)? And why is mode_compute=6 not even working any longer? Gill and King (2004) do make a valid point about the computer’s inability to discern whether the question asked is a relevant one. In any case, could you kindly take a look at the file?