Problems with Maximum Likelihood

I’ve been having trouble estimating a model with maximum likelihood. I’ve successfully simulated the model with plausible parameter values, and I’ve done Bayesian estimation and gotten plausible results. But when I try to do maximum likelihood, I inevitably get some kind of error, usually:

POSTERIOR KERNEL OPTIMIZATION PROBLEM!
(minus) the hessian matrix at the “mode” is not positive definite!
=> posterior variance of the estimated parameters are not positive.

I’ve tried changing initial values and different values for mode_compute, and I’ve tried varying which parameters I estimate versus set (calibrate). As I am a relative Dynare newbie, I’m hoping there is some simple answer. I’m attaching the .mod file and the data in .xls. Any suggestions would be greatly appreciated.
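For context, my setup is essentially a bare maximum likelihood run, i.e. an estimated_params block with initial values but no priors, along these lines (the parameter and shock names here are just placeholders, not the ones in the attached file):

estimated_params;
// initial value and bounds only, no prior => maximum likelihood
rho, 0.9, 0, 0.999;
stderr e_a, 0.01, 0, 1;
end;

// varobs is declared earlier in the file
estimation(datafile=autodata_ur, mode_compute=4);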
autodata_ur.xls (27.5 KB)
invmod6ml_ur_est.mod (1.7 KB)

Maximum likelihood estimation is very difficult if some of your parameters are poorly identified. One thing that you can do is to widen the priors used in the Bayesian estimation and see how far you can go that way. Checking how the posterior is moving in the process is instructive about the role of the priors.
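Concretely, widening a prior just means increasing its standard deviation in the estimated_params block, e.g. (parameter name and numbers purely illustrative):

estimated_params;
// original prior
// rho, beta_pdf, 0.70, 0.05;
// widened prior: same mean, larger standard deviation
rho, beta_pdf, 0.70, 0.15;
end;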

Best

Michel

Thanks for the reply. I’ve tried that to some extent. With Bayesian estimation, all the parameters I’ve estimated have posteriors considerably tighter than their priors, so I’m pretty sure they’re identified (or is that not conclusive?). I did try using uniform priors over fairly wide ranges as a proxy for ML and had similar problems. I have also tried fixing some of the parameters and estimating only a subset, and I can continue along those lines.
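(For the uniform-prior attempt, the entries were of roughly this form, with made-up bounds:)

estimated_params;
// wide uniform prior as a rough stand-in for ML
rho, uniform_pdf, , , 0, 1;
end;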

One thing I corrected in the original .mod file that is not the issue: the original file had a term involving 1/(1-gam) in one of the equations, and gam can take on the value of 1 (it’s a CES function). I’ve modified the file to eliminate that term, so there is no divide-by-zero problem. I’m posting the revised .mod file. It estimates beautifully with Bayesian methods (though I think mode_compute=6 is necessary) using beta_pdf priors. I have tried using the posterior means as the initial values in ML, but I still get either:

POSTERIOR KERNEL OPTIMIZATION PROBLEM!
(minus) the hessian matrix at the “mode” is not positive definite!
=> posterior variance of the estimated parameters are not positive.
You should try to change the initial values of the parameters using
the estimated_params_init block, or use another optimization routine.
Warning: The results below are most likely wrong!

if I don’t specify mode_compute=6, or, if I do specify it, the following:

Warning: Matrix is singular, close to singular or badly scaled.
Results may be inaccurate. RCOND = NaN.

In dynare_estimation_1 at 324
In dynare_estimation at 62
In invmod6ml_ur_est at 231
In dynare at 132
??? Error using ==> chol
Matrix must be positive definite.

I will continue to try various things (setting some parameters vs. estimating them, using uniform priors, etc.) to see if something works, but I haven’t found anything yet.
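For what it’s worth, the way I’ve been feeding the posterior means in as starting values is an estimated_params_init block before the estimation command, roughly like this (the parameter names other than gam, and all of the values, are placeholders rather than my actual posterior means):

estimated_params_init;
gam, 0.85;
rho, 0.72;
stderr e_a, 0.011;
end;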
invmod6ml_ur_est.mod (2.07 KB)
invmod6h1_ur_est.mod (2.21 KB)

My advice is not to jump immediately to uniform priors, but to increase the standard deviation of the priors that give good results in the first Bayesian attempt. But be careful not to overdo it and end up with weird shapes for the gamma and beta prior densities.
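To make that concrete with a made-up example: a beta prior with mean 0.5 and standard deviation 0.15 is hump-shaped, but pushing the standard deviation to 0.3 with the same mean gives a U-shaped density that piles most of the mass near 0 and 1, which is usually not what you intend. The plot_priors option of the estimation command lets you inspect the shapes before running anything long.

estimated_params;
// reasonable: hump-shaped beta prior
rho, beta_pdf, 0.5, 0.15;
// too wide: this one is U-shaped (both shape parameters fall below 1)
// rho, beta_pdf, 0.5, 0.3;
end;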

Best

Michel

Thanks, I will try that. But I may have discovered the source of the problem: one parameter that may not be well identified. When I set it rather than estimate it, I’m finally able to get ML to run. Unfortunately it’s a really important parameter, and my intuition was that it should be identified. :frowning: But at least I was finally able to generate some ML estimates of the other parameters conditional on that one.

Just to follow up, in case anyone encounters a similar problem: I was eventually able to estimate essentially all of the parameters by working my way toward better initial values, sequentially estimating larger and larger subsets of the parameters. It’s a trial-and-error process, and the key parameter I was interested in, the one I thought was causing the problem, turned out to be very precisely estimated.
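In case it helps, the pattern was roughly this (parameter names, bounds, and numbers only illustrative): estimate a small subset first with everything else calibrated, then enlarge the estimated set while starting the already-estimated parameters at the values found in the previous run.

// step 2 of the sequence: a larger estimated set, with the
// step-1 parameters started at their previously estimated values
estimated_params;
rho, 0.9, 0, 0.999;
gam, 1.0, 0, 5;
stderr e_a, 0.01, 0, 1;
end;

estimated_params_init;
rho, 0.87;
stderr e_a, 0.008;
end;

estimation(datafile=autodata_ur, mode_compute=6);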

I have a similar issue with the newest version of Dynare. The discussion is very helpful. However, I have a related question. In my exercise, I found that the optimization routine sometimes gives me two different modes: one with a lower likelihood function value but a nice Hessian, and another with a higher likelihood but a troublesome Hessian. If that is the case, should I search around the second mode or the first one?

A positive definite Hessian indicates that you are at a local minimum of the objective function (i.e., at a local maximum of the posterior density/likelihood function). If the likelihood at the first mode is lower than at some other point, that mode is a local optimum, but not the global one, and the global optimum will not be in the neighborhood of that point. So it is the region around the second mode, the one with the higher likelihood, that is worth exploring further.
Remember that for problems where the mode is hard to find, you can try mode_compute=6.
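In practice that is just an option on the estimation command; combining it with mode_check is a convenient way to see whether the reported mode really looks like a maximum, for example:

estimation(datafile=autodata_ur, mode_compute=6, mode_check);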

Best

Michel