Estimation with mode_compute = 6: "Error: matrix must be positive definite"

Hello,

I am running an estimation using Occbin and mode_compute = 6. Depending on the exact setup of the problem, I occasionally run into a hard error:

Error using chol
Matrix must be positive definite.

Error in gmhmaxlik_core (line 194)
    dd = transpose(chol(CovJump));

Error in gmhmaxlik (line 100)
    [PostMode, PostVariance, Scale, PostMean] = gmhmaxlik_core(fun, OldPostMode, bounds, gmhmaxlikOptions, Scale, flag, MeanPar, OldPostVariance, varargin{:});

That is, the minimiser makes it all the way to the "last call" and then fails because the jumping matrix is not positive definite. Presumably, though, the matrix must have been positive definite while the scale parameter was being tuned, so this seemed an odd moment for the error to surface.

It might be that this is the intended behaviour, but if so could I suggest adding some minimal error handling and a user-friendly error message (like the error messages when the initial likelihood is NaN)?
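For illustration, the kind of guard I have in mind (a sketch only; CovJump and dd follow the snippet above, the rest of the names and the message text are purely illustrative):

    [CholCovJump, p] = chol(CovJump);   % p > 0 flags a non-PD matrix instead of erroring
    if p > 0
        error(['gmhmaxlik: the covariance matrix of the jumping distribution is ' ...
               'not positive definite. Consider a different mode_compute or ' ...
               'different initial values.'])
    end
    dd = transpose(CholCovJump);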

I have never encountered this before. Do you have code to replicate the problem?

I do, but the runtime is extensive (~36 hours), at least on my computer. Happy to provide it if that's reasonable for you, but I appreciate you obviously have other commitments as well. Alternatively, I could run the process again with a MATLAB breakpoint inserted just before the crash and then save all the internals (M_, oo_, and so on) so you don't need to start the process from scratch, if that would be more helpful.
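Concretely, I was thinking of something along these lines (mymodel.mod stands in for my actual file):

    dbstop in gmhmaxlik_core at 194   % stop just before the failing chol
    dynare mymodel                    % re-run the estimation
    % ...then, once MATLAB halts at the breakpoint:
    save('debug_info.mat')            % dump the local workspace to disk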

  1. Yes, setting a breakpoint and saving the workspace would help.
  2. What was the command window output of the first iterations? The error should only happen if the parameter vector does not move at all.

Apologies for the delayed response. Here is the workspace (minus figures) at the breakpoint on line 194 of gmhmaxlik_core.m. As before, chol(CovJump) fails because CovJump is not positive definite (and indeed it isn't, on inspection). I have left MATLAB frozen at the breakpoint in case there is anything else I can provide you with (it is running on a remote server, so this isn't a problem).
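For what it's worth, "on inspection" means nothing fancier than a check along these lines at the breakpoint:

    min(eig(CovJump))         % not strictly positive
    [~, p] = chol(CovJump)    % p > 0, consistent with the error above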

My working theory is that successful likelihood evaluations are too sparse: my model appears to be somewhat unstable and has no Occbin solution at many parameter combinations, which might be causing an issue in computing the jumping matrix. I am running the model again in a separate instance of MATLAB with more robust Occbin settings to see if this makes a difference.

EDIT: In answer to your second question, the console output was:

Initial value of the log posterior (or likelihood): -119266.4049

==========================================================
   Change in the posterior covariance matrix = 5625.
   Change in the posterior mean = 27.1992.
   Current mode = 62614.9277
   Mode improvement = 56651.4772
   New value of jscale = 2.6446e-07
==========================================================

==========================================================
   Change in the posterior covariance matrix = 1.2222e-06.
   Change in the posterior mean = 27.2253.
   Current mode = 115044.9376
   Mode improvement = 52430.0099
   New value of jscale = 3.4625e-14
==========================================================

debug_info_nopics.mat (46.6 KB)

The immediate cause is most likely due to running into a boundary. The small jumps then cause numerical issues. There is only so much one can do about that. I gave it a try at mode_compute=6: make covariance computation numerically more robust (!2068) · Merge requests · Dynare / dynare · GitLab
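For reference, a generic way to robustify this step is to symmetrise the matrix and floor its eigenvalues before factorising; this is only a sketch of the general technique, not necessarily what the merge request implements:

    CovJumpSym = (CovJump + CovJump')/2;     % remove round-off asymmetry
    [V, D] = eig(CovJumpSym);
    d = max(diag(D), 1e-12);                 % clip non-positive eigenvalues
    CovJumpPD = V*diag(d)*V';
    CovJumpPD = (CovJumpPD + CovJumpPD')/2;  % re-symmetrise after reconstruction
    dd = transpose(chol(CovJumpPD));         % should now succeed, up to round-off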

Many thanks. Is there anything obvious I can do to make the jumps bigger? Or to put it another way, under what conditions does MC=6 converge on microscopic steps?

You will always have a problem when you move very close to a boundary in the parameter space. The same would happen for other optimizers.
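If you want to check whether that is what is happening, compare the current parameter vector with its bounds; xparam, lb, and ub below are placeholders for the parameter vector and its lower/upper bounds, not Dynare internals:

    tol = 1e-6;
    at_bound = (xparam - lb) < tol | (ub - xparam) < tol;
    find(at_bound)    % indices of parameters pinned at a boundary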


Hi, I am running into the same problem. Could you please tell me how you solved it?