Of irritating Hessian matrices and harmonic mean estimators!

Hello,

My model keeps giving one of these two error messages: ‘There’s probably a problem with the modified harmonic mean estimator’ or ‘(minus) the Hessian matrix at the “mode” is not positive definite!’ I looked up previous threads and tried everything I could find in this forum:
(a) different mode_compute options: 4, 6, 8, 9
(b) reduced the number of parameters I was estimating
(c) changed the prior distributions of some of the parameters
(d) toyed with initial values for parameters different from the prior means
(e) included 'mcmc_jumping_covariance=identity_matrix' in the estimation command.

The model runs fine when I run a calibrated version of it, so I am at my wits’ end.

I am estimating around 20-25 parameters; my system has 10 shocks (and hence 10 observed variables). Each variable has 57 observations. Is anything amiss?

Would be grateful if someone could guide me to the right course of action.

Thanks!

This sounds like a problem with estimation. Is there anything unusual with the posterior estimates? NaN or at the boundary? If yes, check your observation equations.

Thanks Johannes for your reply. My observation equations are quite straightforward, given that neither my model nor my observed variables have a trend. So I have simply equated each observed variable to its corresponding counterpart in the model. Prima facie the posterior estimates look okay, except that at one point it says ‘??? Index exceeds matrix dimensions’. With 1,000 replications, the full error message is posted below:


Estimation::marginal density: Let me try again.
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 6.487123e-018.

In marginal_density at 111
In dynare_estimation_1 at 799
In dynare_estimation at 89
In Bayesian_For_Display at 595
In dynare at 180
Estimation::marginal density: There’s probably a problem with the modified harmonic mean estimator.

ESTIMATION RESULTS

Log data density is -Inf.
posterior_moments: There are not enough draws computes to compute HPD Intervals. Skipping their computation.
posterior_moments: There are not enough draws computes to compute deciles. Skipping their computation.

parameters
prior mean post. mean 90% HPD interval prior pstdev

rho_a 0.750 0.7657 0.6789 0.9672 beta 0.1000
rho_c 0.750 0.6736 0.5891 0.8392 beta 0.1000
rho_i 0.750 0.9850 0.9712 0.9963 beta 0.1000
rho_l 0.750 0.3084 0.2424 0.4475 beta 0.1000
rho_g 0.750 0.7616 0.6960 0.8660 beta 0.1000
rho_f 0.750 0.4767 0.2766 0.5874 beta 0.1000
rho_mark 0.750 0.3621 0.1423 0.5110 beta 0.1000
rho_yf 0.750 0.6691 0.5903 0.7348 beta 0.1000
rho_pie_star 0.750 0.5881 0.4572 0.7911 beta 0.1000
rho_t 0.750 0.7571 0.6182 0.9024 beta 0.1000
mu_r 0.750 0.8639 0.5977 0.9712 beta 0.1000
mu_h 1.500 2.6424 2.4477 2.8511 unif 0.8660
mu_f 1.500 2.6758 2.3646 2.8441 unif 0.8660
mu_e 1.500 1.0984 1.0094 1.3386 unif 0.8660
mu_gdp 0.000 -0.0860 -0.1739 0.0203 unif 0.5774
phi_gdp 1.500 1.6567 1.4008 1.7742 unif 0.8660
phi_b 1.500 0.6914 0.6201 0.8438 unif 0.8660
phi_pd 1.500 0.1379 0.0508 0.1849 unif 0.8660
phi_gdp_t 1.500 0.6442 0.5940 0.7165 unif 0.8660
phi_b_t 1.500 1.7637 1.7310 1.8030 unif 0.8660
phi_pd_t 1.500 1.2622 1.1335 1.3395 unif 0.8660
??? Index exceeds matrix dimensions.

Error in ==> DsgeSmoother at 80
constant = SteadyState(bayestopt_.mfys);

Error in ==> dynare_estimation_1 at 836
[atT,innov,measurement_error,updated_variables,ys,trend_coeff,aK,T,R,P,PK,decomp] =
DsgeSmoother(xparam1,dataset_.info.ntobs,dataset_.data,dataset_.missing.aindex,dataset_.missing.state);

Error in ==> dynare_estimation at 89
dynare_estimation_1(var_list,dname);

Error in ==> Bayesian_For_Display at 595
dynare_estimation(var_list_);

Error in ==> dynare at 180
evalin('base',fname) ;

Please provide the files.

Please find attached…
Final2.xls (33.5 KB)
Bayesian_For_Display.mod (9.77 KB)

First of all, you try to estimate rho_l, but it is not in the model at all and thus not identified.

More importantly, the observation equations are wrong, as I already guessed. Take cons_c. It has steady state 3.29114, but your data series cons_c has mean -0.000277193. Also, looking at the data, what is a gdp_c value of 593943.4 supposed to mean? How are the units of this observed data variable comparable to the model?

Moreover, your model seems to be in percentage deviations from steady state, but for some reason the steady state is not 0 for these variables. This suggests there is an issue with the model.
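A quick sanity check along these lines can be sketched in a few lines (hypothetical stand-in data, using the numbers quoted above): a log-linearised observable should fluctuate around its model counterpart's steady state, so a large gap between the sample mean and the steady state flags a mis-specified observation equation.

```python
import numpy as np

# Stand-in for the observed series (its mean matches the value quoted above);
# the steady state is the model value for cons_c from the thread.
model_steady_state = 3.29114
cons_c_data = np.full(57, -0.000277193)

gap = abs(cons_c_data.mean() - model_steady_state)
print(f"gap = {gap:.3f}")  # ~3.291: observation equation is inconsistent
```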

Thanks Johannes…immensely helpful. Can I check with you on the issue of non-zero steady states: in your opinion, does this generally come from incorrect parameter values computed at the steady state, or simply from wrong algebra in the log-linearised equations? That way, I would know what to look for as I revisit my model.

To answer your question on the absurd values my data was giving: what I had done was simply pass the raw levels of all the variables (consumption, investment, etc.) through the Hodrick-Prescott filter…and thought, voila, here is a series without trend or seasonality that is perfectly cyclical, just as my model demands! But I realise that my thinking was wrong - I will read your manual's section on observation equations thoroughly to see how I can make this more sensible.
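What I presumably should have done is take logs first and then filter, so the cycle is (approximately) a percentage deviation from trend. A minimal numpy sketch, with a made-up series (no claim this is the exact transformation my data needs):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Split series y into (trend, cycle) with the Hodrick-Prescott filter."""
    T = len(y)
    # Second-difference matrix D (shape (T-2, T)) with rows [1, -2, 1].
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # The trend solves the HP minimisation: (I + lam * D'D) * trend = y.
    trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
    return trend, y - trend

# Log the level series FIRST, then filter: the cycle then matches a
# log-linearised model variable with a zero steady state.
t = np.arange(57)
y_level = np.exp(0.01 * t + 0.02 * np.sin(t))  # made-up level series
trend, cycle = hp_filter(np.log(y_level))
```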

Unless you deliberately want the constant part of your variables to show up (done e.g. sometimes in Gali’s textbook), it means your (log)-linearization is wrong. Regardless of parameters, percentage deviations from steady state must be 0 in steady state.
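In symbols (generic notation, not your model's variable names): if a hatted variable is defined as the percentage deviation from steady state,

```latex
\hat{x}_t \;\equiv\; \log X_t - \log \bar{X},
\qquad\text{so that}\qquad
X_t = \bar{X} \;\Longrightarrow\; \hat{x}_t = 0 ,
```

then a nonzero steady-state value of \(\hat{x}\) signals an algebra error in the linearisation, regardless of the parameter values.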

Johannes…I have to compliment you on a very neatly written manual! I followed your instructions on observation equations carefully, and to my childish delight…the model worked! Can I ask a couple of follow-up questions:

(a) In your manual, you mention that for output (Y) in a log-linearised model it is advisable to use Y/N, i.e. per-capita output. While I understand the rationale, in many emerging economies data on labour is quite scarce - in some cases declared only once in almost a decade. In such cases, can we ‘cheat’ by taking the log of Y instead, and go ahead with the chores of seasonal adjustment, de-meaning, etc.?

(b) In my model, after running everything successfully, I am getting a collinearity error during identification (full error message posted below). The ‘phi_’ parameters in the error message pertain to the fiscal rules in my model - phi_g and phi_t are feedback parameters for the government spending and tax rules respectively, i.e. g = phi_g*g(-1) + (1-phi_g)*(phi_b*b + phi_pd*pd + …).

Could I possibly get your view on this…

==== Identification analysis ====

Testing prior mean


Parameter error:
The model does not solve for prior_mean with error code info = 7

Try sampling up to 50 parameter sets from the prior.
Evaluating simulated moment uncertainty … please wait
Doing 465 replicas of length 300 periods.
Simulated moment uncertainty … done!

All parameters are identified in the model (rank of H).

WARNING !!!
The rank of J (moments) is deficient!

[theta_i,eta_c] are PAIRWISE collinear (with tol = 1.e-10) !

phi_g is collinear w.r.t. all other params!
phi_t is collinear w.r.t. all other params!
phi_b is collinear w.r.t. all other params!
phi_pd is collinear w.r.t. all other params!
phi_b_t is collinear w.r.t. all other params!
phi_pd_t is collinear w.r.t. all other params!
??? Error using ==> bar at 57
Only complex vectors are supported.

Error in ==> plot_identification at 74
bar(log([idehess.deltaM(is) idehess.deltaM_prior(is)]))

Error in ==> dynare_identification at 352
plot_identification(params,idemoments_point,idehess_point,idemodel_point,idelre_point,advanced,parameters,name,IdentifDirectoryName);

Error in ==> Bay1_Markup_tax at 594
dynare_identification(options_ident);

Error in ==> dynare at 180
evalin('base',fname) ;

a) N in my guide is not labor, but the number of people in the labor force. Alternatively, you could use the population to scale variables to per-capita terms. Just taking output is not desirable if there is large population growth.
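A sketch of the kind of transformation meant here, with made-up series (the growth rates and the population proxy are illustrative assumptions, not a prescription for your data): scale by population, take logs, demean.

```python
import numpy as np

t = np.arange(57)
Y = 1000.0 * np.exp(0.008 * t)   # made-up aggregate output with trend growth
N = 50.0 * np.exp(0.005 * t)     # made-up population proxy

# Per-capita, log, demean: the result is a zero-mean observable that can
# be matched to a log-linearised model variable with zero steady state.
y_obs = np.log(Y / N)
y_obs = y_obs - y_obs.mean()
```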

b) I would need to see the mod-file. But it looks as if all parameters are theoretically identified in the model (the message about the rank of H), but not with the observed data you are using (the message about the rank of J). If you observed more data series, it would work.

Thanks for this Johannes. Please find attached the relevant files…
Final.xls (36.5 KB)
Bay1_Markup_tax_forum.mod (9.74 KB)

My conjecture was correct. If you add c_h_hat to the observables, the error message is gone. Thus, your currently observed data is not sufficiently informative to separately identify theta_i and eta_c.

Thanks Johannes for your reply. I am a bit confused - unless I am mistaken, by ‘observables’ you mean the varobs block? I have data for only one consumption variable, called c_obs, which in my measurement-equation block I have equated to c_hat. So at present there is no supporting data for c_h_hat; unless of course you are suggesting that I equate c_obs to c_h_hat instead. Alternatively, if you are suggesting that I simply add the c_h_hat variable to the varobs block regardless of the fact that there is no observable data series corresponding to it - I am not quite sure I understand how that is allowed.

This was just a diagnostic example. I am saying that the current set of observed data series does not allow these two parameters to be separately identified. You need more data series.