Hi, I’m estimating an already log-linearized model with maximum likelihood (ML).

The ML estimates for some of the parameters seem to be very close to the initial points that I specify. Is that a sign that these parameters are not well identified by the data and therefore should be calibrated prior to estimation?

I also used mode_check in my estimation command. For most of the parameters, the estimate sits at the trough of the parabola, but for some it doesn't. Does this indicate that the program has found a local maximum rather than the global maximum? Should I move the initial parameter guesses in the direction of the troughs?

In general, how do I make sure I obtain estimates at the global maximum? I don't get the same results when I start the program from different initial values. Should I be comparing the likelihood in each case? Is there any recipe that people follow to be more or less confident about their ML estimates?

No. A sample can be informative about a parameter even if the estimate ends up close to the initial condition (we may simply be lucky, or very good, at choosing initial conditions). What does suggest an identification problem is observing (very) large variances of the ML estimates. In that case we may choose to calibrate some parameters or to change the model.
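To illustrate the point outside of Dynare, here is a minimal sketch of the variance-based diagnostic on a toy normal likelihood in SciPy. The model, the data, and the use of BFGS's inverse-Hessian approximation for asymptotic standard errors are all assumptions of this example, not your model; very large standard errors would be the warning sign discussed above.

```python
# Toy example (not Dynare): ML estimation of a normal mean and scale,
# with asymptotic standard errors from the inverse Hessian of the
# negative log-likelihood. Large standard errors hint at weak
# identification; an estimate near the starting value does not.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=0.8, size=500)  # simulated sample

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)  # log parameterization keeps sigma > 0
    # constant term 0.5*n*log(2*pi) dropped; it does not affect the mode
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

res = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
# BFGS returns an approximation to the inverse Hessian at the optimum;
# its diagonal gives (approximate) asymptotic variances of the estimates.
se = np.sqrt(np.diag(res.hess_inv))
print("estimates:", res.x, "std. errors:", se)
```

Here both standard errors come out small relative to the estimates, so the sample is informative about both parameters even though the starting values were arbitrary.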

The mode_check plots check necessary conditions for a local maximum. So if the ML estimate does not sit at the bottom of the parabola, you can be sure that you do not have a local maximum (and consequently not a global maximum either).
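The same necessary-condition check can be done numerically: evaluate the negative log-likelihood on a one-dimensional grid around the candidate estimate, one parameter at a time, and verify the estimate sits at the trough. This is only a sketch of the idea behind mode_check, with made-up function and argument names, not Dynare's implementation.

```python
# Sketch of a mode_check-style diagnostic (names are illustrative):
# for each parameter, slice the negative log-likelihood around the
# candidate mode and test whether the mode is the grid minimum.
import numpy as np

def mode_check(neg_loglik, theta_hat, width=0.1, n=41):
    ok = []
    for i in range(len(theta_hat)):
        grid = theta_hat[i] + np.linspace(-width, width, n)
        vals = []
        for g in grid:
            th = np.array(theta_hat, dtype=float)
            th[i] = g  # vary one parameter, hold the others fixed
            vals.append(neg_loglik(th))
        # the candidate estimate is the grid midpoint; a local maximum
        # of the likelihood requires it to be the trough of this slice
        ok.append(int(np.argmin(vals)) == n // 2)
    return ok
```

A `False` entry means the corresponding parameter fails the necessary condition, which is exactly the situation where the mode_check plot shows the estimate off the bottom of the parabola.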

You should compare the likelihoods associated with the different initial conditions and keep the run with the highest likelihood. A simpler approach would be to use a Metropolis-Hastings algorithm to get the ML estimate (a Bayesian approach with a uniform prior on all the parameters), or at least a good starting point for the optimizer.
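The multi-start comparison can be sketched in a few lines: run the optimizer from each initial condition and keep the result with the smallest negative log-likelihood. The helper name and the bimodal test objective below are illustrative assumptions, not part of any particular toolbox.

```python
# Multi-start sketch: minimize the negative log-likelihood from several
# starting points and keep the run that achieves the best (lowest) value.
import numpy as np
from scipy.optimize import minimize

def best_of_starts(neg_loglik, starts):
    results = [minimize(neg_loglik, x0=s, method="BFGS") for s in starts]
    # compare achieved negative log-likelihoods across runs;
    # the smallest is the best candidate for the global mode
    return min(results, key=lambda r: r.fun)

# Illustrative bimodal objective with local minima near +1 and -1;
# the global minimum is in the negative basin because of the tilt term.
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + 0.2 * x[0]
best = best_of_starts(f, [np.array([2.0]), np.array([-2.0])])
print("best mode:", best.x, "objective:", best.fun)
```

Starting from only one side would risk reporting the inferior local mode; comparing the achieved objective values across starts is what guards against that.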