Potential (Numerical) Problem with the Diffuse Kalman Filter

Hi,

People told me to post here, but I am not sure if this is the correct place to report this. Let me know if I should direct it somewhere else.

TL;DR:

I think line 157 of <matlab/kalman/likelihood/univariate_kalman_filter_d.m> should be updated to guard against the loss of positive definiteness of the variance matrix when you run a regular (non-square-root) Kalman filter. As a short-term fix, you can check whether the quantity is negative before checking whether it is zero, and assign a large penalty to the likelihood value when it is negative.
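To make the suggestion concrete, here is a stylized Python sketch of the guard (the actual file is MATLAB, and names like `check_variance`, `BAD_LOGLIK_PENALTY`, and `kalman_tol` are my own stand-ins; this only illustrates the branch order I am proposing, not Dynare's code):

```python
BAD_LOGLIK_PENALTY = 1e8  # large value handed back to the optimizer

def check_variance(Fi, kalman_tol):
    """Classify the scalar prediction variance Fi for one observation.

    The key point is to test for a *negative* variance before testing
    for a zero one: a negative Fi can only come from numerical loss of
    positive definiteness, so it should trigger an early exit with a
    penalized likelihood rather than a "free" skip of the observation.
    """
    if Fi < -kalman_tol:
        # Positive definiteness lost: abort the filter for this draw.
        return "abort", BAD_LOGLIK_PENALTY
    if abs(Fi) < kalman_tol:
        # Genuinely (numerically) zero variance: uninformative observation.
        return "skip", None
    return "update", None
```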

Detail:
My colleagues have been having difficulties with estimation whenever they tried to include unit roots. The optimizer would always find one point where the likelihood value is good, but the likelihood falls off a cliff if you deviate by any small amount from the parameters found (and the model's behavior at those parameter values was terrible).

I believe I have isolated it to a numerical problem interacting with the implementation decision on line 157 of the file mentioned above (the line where you check whether the observation is informative in the univariate Kalman filter). The problem is that when you update the variance matrices directly (instead of updating their square roots), you can numerically lose the positive definiteness of the variance matrix; this is exactly why square-root Kalman filters exist. This numerical problem interacts with line 157 because that line checks for rank zero by comparing the raw value (not the absolute value) against a small number. Hence, when positive definiteness is lost through rounding error, the filter updates the likelihood as if the observation carried no information, i.e., it assigns too high a likelihood value. As a result, the optimization/estimation will actively seek out numerically ill-behaved regions if there are any.
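A stylized illustration of the consequence (I do not claim this is Dynare's exact branch structure; `TOL` stands in for a `kalman_tol`-style constant, and the negative `Fi` below is synthetic, standing in for accumulated rounding error in the variance matrix):

```python
import math

TOL = 1e-10  # stands in for a small kalman_tol-style tolerance

def neg_loglik_contribution(vi, Fi):
    """Contribution of one observation to minus the log-likelihood.

    With a raw-value comparison, an Fi that has drifted slightly
    negative through rounding is lumped together with the genuine
    zero-variance case and contributes nothing.
    """
    if Fi < TOL:  # raw-value check: catches Fi = -1e-12 as well as Fi = 0
        return 0.0  # treated as "no information"
    return 0.5 * (math.log(2 * math.pi * Fi) + vi**2 / Fi)

# A healthy step adds a positive penalty term to -loglik...
healthy = neg_loglik_contribution(0.3, 0.8)
# ...while a numerically broken step (Fi < 0) adds nothing, so the
# resulting likelihood is spuriously high.
broken = neg_loglik_contribution(0.3, -1e-12)
```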

I do not have an exact replicating example, but I have tested a numerically guarded implementation on problems that my colleagues told me they had trouble with in the past, and I get a nice posterior distribution even around unit and explosive roots. It is not an exact replication because I didn't rerun things on the Dynare side to confirm, but colleagues at my institution have been running into the same behavior for a decade or so: whenever they tried (Canova, 2014) with unit roots in the unmodelled block, the estimation didn't "work", with the behavior I described above. My test was on one of the models where they had tried and failed to introduce unit roots.

In the long term, I believe a diffuse univariate Kalman filter with square-root updating should be implemented. I have implemented (Bierman-Thornton, 1977) for the square-root updating, and that is the implementation I have tested with. I can probably contribute my code if needed as well (after cleaning things up; but it is a clean(?) implementation of BT1977 anyway, so it can also be implemented directly from that paper). In the short term, you can address this by checking whether positive definiteness was lost, quitting the Kalman filter early, and returning a bad likelihood value (with the disclaimer that this short-term fix comes from theory and I haven't tested it; however, the fix only affects the estimation if you actually lose positive definiteness due to numerical problems).
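For illustration, here is a small Python sketch of a square-root measurement update in the triangularization ("array") form, which is closely related to, but not the same as, BT1977's UD form; it is not my actual implementation, just a demonstration of why factored updating cannot lose positive semi-definiteness (the reconstructed covariance is a Gram matrix by construction):

```python
import numpy as np

def sqrt_kf_update(S, h, r):
    """One scalar-observation measurement update in square-root form.

    S : any square root of the state covariance, P = S @ S.T
    h : observation loading vector
    r : observation noise variance (> 0)

    Builds the pre-array [[sqrt(r), h@S], [0, S]] and triangularizes it
    with a QR factorization; the post-array directly contains sqrt(F),
    the Kalman gain, and the updated square root S_new, with
    S_new @ S_new.T = P - K*F*K' exactly (up to rounding).
    """
    n = S.shape[0]
    pre = np.zeros((n + 1, n + 1))
    pre[0, 0] = np.sqrt(r)
    pre[0, 1:] = h @ S
    pre[1:, 1:] = S
    # pre = L @ Q.T with L lower triangular, so L @ L.T = pre @ pre.T
    _, Rf = np.linalg.qr(pre.T)
    L = Rf.T
    F = L[0, 0] ** 2          # innovation variance h'Ph + r
    K = L[1:, 0] / L[0, 0]    # Kalman gain P h / F (sign-consistent)
    S_new = L[1:, 1:]         # updated square root of the covariance
    return S_new, K, F
```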

PS- Disclaimer: I traced the if-else tree to confirm that the diffuse Kalman filter case ends up in that file in Dynare 5.6, but I haven't traced the if-else tree in Dynare 6.0, so things might have been indirectly fixed in version 6.0. I have confirmed that the line itself remains in Dynare 6.0.

Thanks for reporting this. Did you run the univariate filter deliberately? Or did you use the multivariate one, and it triggered the univariate one due to `~all(abs(Finf(:)) < diffuse_kalman_tol)`?

I don’t think the univariate filtering was forced by a setting. (Sorry about the “I don’t think”: the tests in Dynare were done by colleagues, and I believe they did test different Kalman filter flags.)

The multivariate version requires the invertibility of a matrix, denoted F_\infty in the Durbin-Koopman book/paper, so I will assume that the line you quoted checks for that, even though it doesn’t check purely for invertibility. I believe the invertibility of F_\infty is usually not satisfied when you join a stable DSGE block with a statistical block that contains unit roots. In those cases, you would/should fall back to the univariate filter.
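As a toy illustration of why F_\infty is typically singular in such setups (my own construction, nothing from Dynare): suppose two observables both load on a single diffuse (unit-root) state. Then F_\infty = Z_\infty P_\infty Z_\infty' has rank 1, cannot be inverted, and the univariate fallback is needed.

```python
import numpy as np

# Two observables loading the same single diffuse state (toy example):
Z_inf = np.array([[1.0], [1.0]])  # observation loadings on the diffuse state
P_inf = np.array([[1.0]])         # diffuse part of the state covariance
F_inf = Z_inf @ P_inf @ Z_inf.T   # 2x2 but only rank 1, hence singular
print(np.linalg.matrix_rank(F_inf))  # prints 1 (< 2)
```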

Thanks. Do your colleagues maybe have an example triggering the issue? That would be very helpful. We can treat it as confidential, if necessary.
Also, a first fix is at 🐛 univariate_kalman_filter_d.m: discard pathological parameter draws that... (!2280) · Merge requests · Dynare / dynare · GitLab

I knew that you would want a minimal example! I have been postponing writing this until I could cook up a minimal example, but a (different) colleague told me to post it now anyway. I am a bit busy, but I will try to put together a minimal example in the future. The nice thing is that the short-term fix does not affect anything unless you lose positive definiteness, so it is, at worst, harmless.

I will forward the updated file to my colleagues and ask them to test things out. However, people are cyclically busy, so this isn’t the best time; I will also remind them when they are less busy.

PS- I only linked one file, but a similar line exists in another file with a similar name in the kalman folder. I only named one of them because I didn’t know you would push an update so fast! Thank you :smile: You should be able to find it easily if you haven’t done so already. Let me know if you want me to link the line number and filename of that file as well.

  1. Actually, I don’t need a minimal example, just any example triggering that part of the code, to see (and better understand) what was going on before.
  2. I found the second file and pushed a fix. Thanks for pointing this out. The issue should be fixed in Dynare 6.1.