Usual Bayesian estimation problem

But you are not providing those calibrated values as initial values for estimation, so the prior mean is used instead. See the manual on how to provide starting values for estimation.

I do understand what you mean, but I think we have already been through this:

eta, 2.45, normal_pdf, 2, 0.75;

with 2.45 as the starting value for the full-information estimation procedure in question. Nevertheless, the outcome does not change.

[code]Configuring Dynare …
[mex] Generalized QZ.
[mex] Sylvester equation solution.
[mex] Kronecker products.
[mex] Sparse kronecker products.
[mex] Local state space iteration (second order).
[mex] Bytecode evaluation.
[mex] k-order perturbation solver.
[mex] k-order solution simulation.
[mex] Quasi Monte-Carlo sequence (Sobol).
[mex] Markov Switching SBVAR.

Starting Dynare (version 4.3.3).
Starting preprocessing of the model file …
Substitution of endo lags >= 2: added 3 auxiliary variables and equations.
Found 20 equation(s).
Evaluating expressions…done
Computing static model derivatives:
 - order 1
Computing dynamic model derivatives:
 - order 1
 - order 2
Processing outputs ...done
Preprocessing completed.
Starting MATLAB/Octave computing.

You did not declare endogenous variables after the estimation/calib_smoother command.
Prior distribution for parameter rho_g has two modes!
Warning: File ‘usvs1/prior’ not found.

In CheckPath at 41
In set_prior at 264
In dynare_estimation_init at 123
In dynare_estimation_1 at 59
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Loading 67 observations from simdata.m

Initial value of the log posterior (or likelihood): -16066322706048.12


f at the beginning of new iteration, 16066322706048.1230468750
Predicted improvement: 613049708085010484953088.000000000
lambda = 1; f = 887739114016133888.0000000
lambda = 0.33333; f = 98651958521113696.0000000
lambda = 0.11111; f = 10975609237057046.0000000
lambda = 0.037037; f = 1233793092106692.0000000
lambda = 0.012346; f = 151369223390114.9062500
lambda = 0.0041152; f = 31099953773500.9921875
lambda = 0.0013717; f = 17736717970434.7265625
lambda = 0.00045725; f = 16251919451168.7500000
lambda = 0.00015242; f = 16086943657534.4746094
lambda = 5.0805e-05; f = 16068613620336.5000000
lambda = 1.6935e-05; f = 16066577151716.4472656
lambda = 5.645e-06; f = 16066350944818.5703125
lambda = 1.8817e-06; f = 16066325833183.2207031
lambda = 6.2723e-07; f = 16066323050489.6523438
lambda = 2.0908e-07; f = 16066322743718.1386719
lambda = 6.9692e-08; f = 16066322710082.0253906
lambda = 2.3231e-08; f = 16066322706451.1367188
lambda = 7.7435e-09; f = 16066322706081.0820312
lambda = 2.5812e-09; f = 16066322706049.3125000

lambda =

-6.2723e-07

lambda = -6.2723e-07; f = 16066370588169.4121094
lambda = -2.0908e-07; f = 16066328025234.1621094
lambda = -6.9692e-08; f = 16066323296720.5039062
lambda = -2.3231e-08; f = 16066322771563.8730469
lambda = -7.7435e-09; f = 16066322713291.0605469
lambda = -2.5812e-09; f = 16066322706841.7070312
Norm of dx 1.1073e+10

Improvement on iteration 1 = 0.000000000
improvement < crit termination
smallest step still improving too slow, reversed gradient
Objective function at mode: 16066322706048.123047

MODE CHECK

Fval obtained by the minimization routine: 16066322706048.123047

RESULTS FROM POSTERIOR MAXIMIZATION
parameters
prior mean mode s.d. t-stat prior pstdev

eta 2.450 2.0000 10.0000 0.2000 norm 0.7500
sigma_c 1.620 1.0000 10.0000 0.1000 norm 0.3750
h 0.690 0.7000 10.0000 0.0700 beta 0.1000
omicron 5.860 4.0000 10.0000 0.4000 norm 2.0000
omega 3.230 5.0000 10.0000 0.5000 norm 0.1000
rho_rn 0.880 0.7500 10.0000 0.0750 norm 0.1000
phi_pi 1.480 1.5000 10.0000 0.1500 norm 0.1000
phi_y 0.080 0.1250 10.0000 0.0125 norm 0.0500
tau 0.660 0.7700 10.0000 0.0770 beta 0.1000
xi 0.870 0.7500 10.0000 0.0750 beta 0.1000
rho_kappa 0.490 0.8500 10.0000 0.0850 beta 0.1000
rho_z 0.750 0.8500 10.0000 0.0850 beta 0.1000
rho_s 0.866 0.8500 10.0000 0.0850 beta 0.1000
rho_a 0.822 0.8500 10.0000 0.0850 beta 0.1000
rho_vphi 0.700 0.8500 10.0000 0.0850 beta 0.1000
rho_g 0.980 0.8500 10.0000 0.0850 beta 0.1000
standard deviation of shocks
prior mean mode s.d. t-stat prior pstdev

e_kappa 0.250 0.2500 10.0000 0.0250 invg Inf
e_z 0.250 0.2500 10.0000 0.0250 invg Inf
e_a 0.250 0.2500 10.0000 0.0250 invg Inf
e_s 0.250 0.2500 10.0000 0.0250 invg Inf
e_vphi 0.250 0.2500 10.0000 0.0250 invg Inf
e_g 0.250 0.2500 10.0000 0.0250 invg Inf

Log data density [Laplace approximation] is -16066322705977.250000.

Warning: File ‘usvs1/metropolis’ not found.

In CheckPath at 41
In metropolis_hastings_initialization at 62
In random_walk_metropolis_hastings at 69
In dynare_estimation_1 at 931
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Multiple chains mode.
MH: Searching for initial values…
MH: I couldn’t get a valid initial value in 100 trials.
MH: You should Reduce mh_init_scale…
MH: Parameter mh_init_scale is equal to 0.400000.
MH: Enter a new value… 0.01
MH: Initial values found!

MH: Number of mh files : 1 per block.
MH: Total number of generated files : 3.
MH: Total number of iterations : 1000.
MH: average acceptation rate per chain :
0 0 0

MH: Total number of Mh draws: 1000.
MH: Total number of generated Mh files: 1.
MH: I’ll use mh-files 1 to 1.
MH: In mh-file number 1 i’ll start at line 500.
MH: Finally I keep 500 draws.

MH: I’m computing the posterior mean and covariance… Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.163005e-18.

In compute_mh_covariance_matrix at 74
In marginal_density at 50
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.163005e-18.
In marginal_density at 56
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Done!

MH: I’m computing the posterior log marginale density (modified harmonic mean)…
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.163005e-18.

In marginal_density at 67
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: The support of the weighting density function is not large enough…
MH: I increase the variance of this distribution.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.586269e-18.
In marginal_density at 102
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.592658e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.168076e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.786268e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.834621e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.125031e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.781055e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.221069e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.115450e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.977791e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.016878e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.455596e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.778913e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.844161e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.910434e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 6.745399e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: Let me try again.
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 7.237027e-18.
In marginal_density at 108
In dynare_estimation_1 at 948
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
MH: There’s probably a problem with the modified harmonic mean estimator.

ESTIMATION RESULTS

Log data density is -Inf.

parameters
prior mean post. mean conf. interval prior pstdev

eta 2.450 2.0414 1.9497 2.1627 norm 0.7500
sigma_c 1.620 1.0338 0.9648 1.0706 norm 0.3750
h 0.690 0.7590 0.7120 0.8417 beta 0.1000
omicron 5.860 3.9508 3.8862 3.9914 norm 2.0000
omega 3.230 5.0411 5.0301 5.0482 norm 0.1000
rho_rn 0.880 0.8121 0.7024 0.8801 norm 0.1000
phi_pi 1.480 1.5601 1.4707 1.7117 norm 0.1000
phi_y 0.080 0.1868 0.1349 0.2436 norm 0.0500
tau 0.660 0.7389 0.6294 0.8507 beta 0.1000
xi 0.870 0.8081 0.7207 0.8520 beta 0.1000
rho_kappa 0.490 0.7700 0.7154 0.8380 beta 0.1000
rho_z 0.750 0.8773 0.7800 0.9973 beta 0.1000
rho_s 0.866 0.8493 0.6923 0.9485 beta 0.1000
rho_a 0.822 0.7998 0.7452 0.8788 beta 0.1000
rho_vphi 0.700 0.8193 0.7023 0.8815 beta 0.1000
rho_g 0.980 0.8206 0.7586 0.9122 beta 0.1000

standard deviation of shocks
prior mean post. mean conf. interval prior pstdev

e_kappa 0.250 0.2957 0.1881 0.3978 invg Inf
e_z 0.250 0.1991 0.1272 0.2554 invg Inf
e_a 0.250 0.3307 0.0653 0.4636 invg Inf
e_s 0.250 0.1772 0.1680 0.1884 invg Inf
e_vphi 0.250 0.2952 0.2244 0.4014 invg Inf
e_g 0.250 0.2269 0.1038 0.3184 invg Inf
Warning: BETAINV did not converge for a = 0.9408, b = 0.0192, p = 0.999.

In betainv at 61
In draw_prior_density at 47
In PlotPosteriorDistributions at 80
In dynare_estimation_1 at 951
In dynare_estimation at 70
In usvs1 at 322
In dynare at 120
Total computing time : 0h00m50s[/code]

What does this technically mean: “MH: There’s probably a problem with the modified harmonic mean estimator.”?

As always, please post the updated mod-file together with the data-file.

The files are attached.
What is technically preventing the estimation from working?
What are "MH: There's probably a problem with the modified harmonic mean estimator." and "Log data density is -Inf." trying to tell me?

The log above already signals problems in the estimation before those final steps. It looks as if your observables are inconsistent with the model: your observed variables are huge, while your prior standard deviation for the shocks is tiny, so the likelihood is essentially zero. I would guess it is a scaling issue.

Right, I see: would that mean scaling them all down by a factor of a thousand? Thanks.

Your observed variables have to be consistent with the model. There is no arbitrariness in scaling.

In what sense are they inconsistent: qualitatively, with respect to the parameters I am trying to estimate? Should I therefore choose different observables? And what exactly is meant by "too large or huge": large or small relative to what is needed to generate my desired IRFs? What technically defines consistency? Could you please elaborate on that remark? Thanks.

In state space systems like the models we deal with, there is an observation equation. This equation describes how the observed data maps into the model variables. Often, we perform log-linearizations of the model. If your data then has a value of 0.1 this corresponds to a 10% deviation from steady state. Given that your observables are in the range of thousands, they are most probably inconsistent with the way you defined your model variables.
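
To make the scaling point concrete, here is a minimal MATLAB sketch, assuming the raw observable is a series in levels while its model counterpart is defined as a log-deviation from steady state; the file and variable names (mydata.xlsx, Y_raw, y_obs, obsdata) are purely illustrative and not taken from the posted files.

[code]% Minimal sketch (assumption: Y_raw holds the raw observable in levels, and
% the corresponding model variable is a log-deviation from steady state).
Y_raw = xlsread('mydata.xlsx');           % hypothetical raw series in levels
y_obs = log(Y_raw) - mean(log(Y_raw));    % demeaned logs: a value of 0.01 now means a 1% deviation
save obsdata y_obs                        % hypothetical file for estimation(datafile=obsdata,...)[/code]

After such a transformation the observables live on the same scale as the log-linearised model variables, so shock priors with standard deviations around 0.25, as in your table, are no longer orders of magnitude too small.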

Very clear, thanks. But what specifically should one look at in the model to achieve this consistency? Does one have to edit the optimisation problems for each sector and thereby alter the laws of motion? If so, how exactly does that relate to the magnitude of the data?

In other words: where does one find this "definition of the variables" that you are referring to?

I now understand the relationship between the model and the data and am working on providing suitable observations, as advised.
That said, is it possible to estimate the parameters in Dynare by indirect inference / the simulated method of moments (SMM)?

You can do it with Dynare, but not by default. The basic structure you need is outlined in the thread "DSGE based-on IRF from VAR's":

[quote]Re: DSGE based-on IRF from VAR’s
by jpfeifer » Fri Feb 25, 2011 6:12 pm

I am not sure that you cannot directly implement it in Dynare. I guess the way to proceed is the following:

First, design a model you want to fit in Dynare. Just calibrate it to some values and see if it is in principle able to generate the kind of dynamics you are looking for. If it is not able to do so, there is no use in fitting the model to IRFs that it will never be able to generate.

Second, run Dynare with your model file once to generate the IRFs you want to match. Make sure that the IRFs are correctly displayed, and that only the time horizon you want to match is computed. Also, shut off everything in the stoch_simul command, e.g. nofunctions (see manual).

Third, rerun Dynare, but place a line that saves the workspace (the one loaded as level0workspace below) before the stoch_simul command. This will save the workspace immediately before stoch_simul is called.

Now, write a set of functions that performs the IRF-matching.

Fourth, program an objective function that takes as inputs the parameters you want to estimate and returns as output the value of the criterion function, i.e. the weighted distance between the model and the empirical IRFs. In the objective function, use

global oo_ M_ options_
load level0workspace oo_

to have the saved workspace available and to reset the results structure in each iteration step. **Then use the Dynare function set_param_value() to set the parameters passed to the function in Dynare before computing the IRFs. Then extract the program code lines after the evalin command, up to and including the stoch_simul command, from the m-file created by Dynare in the third step.**
By doing this, your program runs the Dynare solver and generates IRFs that are stored in oo_. (It might be necessary to modify the Dynare function stoch_simul to prevent each run from plotting the IRFs.) Finally, in this file, compare the IRFs generated by Dynare under the parameters set through set_param_value with the empirical ones, and return the value of the criterion function.

Also program an outer file that loads level0workspace saved in step three and then uses a minimiser like fsolve or csminwel to minimize the objective function defined in step four over the parameters you want to estimate.

You may also want to define the empirical IRFs as global variables so they can be accessed in the objective function.
[/quote]
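
To make the bolded steps more concrete, here is a minimal MATLAB sketch of such an objective function, under stated assumptions: the workspace saved before stoch_simul is called level0workspace, the empirical IRFs and weighting matrix sit in the globals irf_target and W, and the parameter names (eta, sigma_c), variable names (y, pi) and IRF fields (y_e_g, pi_e_g) are illustrative placeholders rather than anything taken from the original files.

[code]function fval = irf_distance(params)
% Sketch of the IRF-matching objective described above (illustrative names only).
% M_ and options_ are assumed to be in memory from the outer script that loaded
% level0workspace; only oo_ is reloaded here to reset the results structure.
global oo_ M_ options_ irf_target W
load level0workspace oo_                    % reset Dynare's results structure in every iteration
set_param_value('eta',     params(1));      % Dynare helper: writes the candidate values into M_.params
set_param_value('sigma_c', params(2));
var_list_ = char('y','pi');                 % variables whose IRFs are to be matched
info = stoch_simul(var_list_);              % re-solve the model and recompute oo_.irfs
if info(1) ~= 0                             % penalise draws for which the model cannot be solved
    fval = 1e8;
    return
end
irf_model = [oo_.irfs.y_e_g(1:20), oo_.irfs.pi_e_g(1:20)];  % stack the model IRFs (assumed field names)
dev  = irf_model(:) - irf_target(:);
fval = dev' * W * dev;                      % weighted distance between model and empirical IRFs
end[/code]

An outer script would then, as described in the quote, load level0workspace, declare irf_target and W as globals, and hand irf_distance to a minimiser, e.g. xhat = fminsearch(@irf_distance, x0) (or csminwel/fsolve).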

Many thanks for such a detailed explanation, although a few of the steps (in bold) appear somewhat overwhelming given the programme's intricacy.

Given the increasing popularity of this method in applied macroeconomics, particularly its compatibility with log-linearisation as a way of setting aside the model's "Bayesian"/subjective idiosyncrasies and, in essence, its non-linearities, could the Dynare team not generalise such a procedure and make it available to users as a proper alternative to full-information parameter estimation?

We are indeed working on it, but are not there yet.