Branch and McGough (2009) Determinacy Analysis

Dear all,

I tried to replicate the determinacy analysis of the paper “A New Keynesian model with heterogeneous expectations” by Branch and McGough (2009).

When I change the parameters \alpha and \theta as in Figures 1 and 2 of the paper, I obtain different results (except for \alpha = 1, where I obtain a roughly similar result).

My Octave code is attached below.

Thanks in advance.

determinacy_analysis.m (1.8 KB)

Dear all,

in my determinacy analysis I obtain more or less the same result for both adaptive expectations (\theta = 0.9) and extrapolative expectations (\theta = 1.1).
Intuition suggests that this must be wrong: the determinacy area (white) should shrink as \theta increases.
This is also the result of Branch and McGough (2009), pp. 1047-1048.

In the PDF below the different colors of the determinacy analysis refer to:
white = determinacy
gray = order one indeterminacy
black = order two indeterminacy
Determinacy Analysis.pdf (114.7 KB)

On the other hand, my code is so simple that I cannot see any mistake on my side.
Here the code: determinacy_analysis.m (2.0 KB)

Is there perhaps a mistake in the paper?

Please help.

I found out that there might be something at odds with the qz command in Octave 4.2.1.
See: https://savannah.gnu.org/bugs/?50846
Is this causing my strange results?

Best regards,
Max

This is unlikely. What should I run in Matlab to check against your Octave results?

Please run the code two times.
First, just run the code. The result should look like Figure 1 of the PDF.
Second, change the parameter “thet” in line 7 to 1.1 (= extrapolative expectations) and rerun.
The result should look like in Figure 2 of Branch and McGough (2009, p. 1048).
The white area should become smaller under extrapolative expectations.
My figure 2 indicates the opposite.

I strongly mistrust the computation of the eigenvalues in lines 38 and 39 of my code.

Fig. 2 of Branch and McGough (2009, p. 1048) has confused me for so long that it is now time to clarify whether this figure is right or wrong.

For the determinacy analysis the model is simply given by:
\begin{align} y_t &= \alpha E_t y_{t+1} + (1-\alpha)\theta^2y_{t-1} - \frac{1}{\sigma}(\chi_y E_t y_{t+1}+\chi_\pi E_t \pi_{t+1} - \alpha E_t\pi_{t+1} -(1-\alpha)\theta^2\pi_{t-1}) \\ \pi_t &= \beta\alpha E_t\pi_{t+1} + \beta(1-\alpha)\theta^2\pi_{t-1} + \lambda y_t \end{align}
which can be rearranged as
\begin{align} y_t &= \left(\alpha-\frac{\chi_y}{\sigma}\right)E_t y_{t+1} + \left(\frac{\alpha-\chi_\pi}{\sigma}\right)E_t \pi_{t+1} + (1-\alpha)\theta^2y_{t-1} + \frac{(1-\alpha)\theta^2}{\sigma}\pi_{t-1} \\ -\lambda y_t + \pi_t &= \beta \alpha E_t\pi_{t+1} + \beta(1-\alpha)\theta^2\pi_{t-1} .\end{align}
Casting this into the matrix form (equation 23 on p. 1046)
\begin{align} F x_t = BE_tx_{t+1} + Cx_{t-1}; x = \begin{pmatrix}y & \pi \end{pmatrix}'\end{align}
should give the matrices:
\begin{align} F = \begin{pmatrix} 1 & 0 \\ -\lambda & 1\end{pmatrix} ; B= \begin{pmatrix} \alpha-\frac{\chi_y}{\sigma} & \frac{\alpha-\chi_\pi}{\sigma} \\ 0 & \beta\alpha \end{pmatrix} ;C=\begin{pmatrix} (1-\alpha)\theta^2 & \frac{(1-\alpha)\theta^2}{\sigma} \\ 0 & \beta(1-\alpha)\theta^2\end{pmatrix} .\end{align}
For the determinacy analysis I just have to check the Blanchard and Kahn (1980) condition for the matrix
\begin{align} M = \begin{pmatrix}B^{-1}F & -B^{-1}C \\ I_2 & 0_{2 \times 2}\end{pmatrix}\end{align}
which is eq. (24) in the paper.
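For reference, this check on M can be sketched in a few lines (a Python/NumPy sketch of the procedure rather than my attached Octave file; the function name and the illustrative calibration below are my own choices, not values from the paper):

```python
import numpy as np

def classify(F, B, C):
    """Blanchard-Kahn check for F x_t = B E_t x_{t+1} + C x_{t-1}
    via the companion matrix M of eq. (24)."""
    n = F.shape[0]                       # n jump variables, n lags
    Binv = np.linalg.inv(B)
    M = np.block([[Binv @ F, -Binv @ C],
                  [np.eye(n), np.zeros((n, n))]])
    # determinacy requires exactly n eigenvalues outside the unit circle
    n_explosive = int(np.sum(np.abs(np.linalg.eigvals(M)) > 1.0))
    if n_explosive == n:
        return "determinate"
    if n_explosive > n:
        return "explosive"
    return "order-%d indeterminacy" % (n - n_explosive)
```

For example, with \beta = 0.99, \sigma = 1, \lambda = 0.3, \chi_y = 0 and \alpha = 1 (so C = 0), the sketch reproduces the Taylor principle: \chi_\pi = 1.5 comes out determinate, while \chi_\pi = 0.5 yields order-one indeterminacy.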

In my code I decided to rely on the form A E_t x_{t+1} = B x_t instead and used the generalized Schur decomposition. But no matter which representation I use, the results differ from Fig. 2 in Branch and McGough (2009).
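The same count can be done on the pencil without inverting anything (again a Python/SciPy sketch of the idea, not my attached file; comparing |s_i| with |t_i| instead of forming the ratios is a choice that also handles infinite eigenvalues when the lead matrix is singular):

```python
import numpy as np
from scipy.linalg import qz

def count_explosive(F, B, C):
    """Count explosive generalized eigenvalues of the pencil
    A E_t z_{t+1} = Bbig z_t, z_t = (x_t', x_{t-1}')',
    built from F x_t = B E_t x_{t+1} + C x_{t-1}."""
    n = F.shape[0]
    A = np.block([[B, np.zeros((n, n))],
                  [np.zeros((n, n)), np.eye(n)]])
    Bbig = np.block([[F, -C],
                     [np.eye(n), np.zeros((n, n))]])
    # complex QZ: S and T are triangular, eigenvalues are s_i / t_i
    S, T, Q, Z = qz(Bbig, A, output='complex')
    s, t = np.diag(S), np.diag(T)
    # |s_i| > |t_i| counts roots outside the unit circle without
    # dividing, so infinite eigenvalues (t_i = 0) are handled too
    return int(np.sum(np.abs(s) > np.abs(t)))
```

Since M = A^{-1} Bbig whenever B is invertible, the counts must agree with the eig() approach up to numerical accuracy.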

Here the code relying on matrix M and the eig() command: determinacy_analysis.m (1.9 KB)
Somewhat surprisingly, the results with matrix M and the eig() command are slightly different.

This .m file makes the different outcomes between generalized Schur and eig() clear: determinacy_analysis_genSchur_vs_eig.m (4.4 KB)

Is there something wrong with the computation of eigenvalues in Octave 4.2.1?
Or am I doing a mistake?
Or is the paper wrong?

  1. Matlab gives pretty much the same results as Octave as far as I can see. The white region does not expand in Matlab either.
  2. Your codes return on Matlab:
Results based on generalized Schur:
For alpha = 1: (det,instab,Or1ind,Or2ind) = 3325      0  27992   8683
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 3522      0  33135   3343
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 5155      0  34845      0
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 5722      0  34276      2
Results for the eig() command
For alpha = 1: (det,instab,Or1ind,Or2ind) = 3325      0  27990   8685
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 3522      0  33135   3343
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 5155      0  34845      0
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 5722      0  34276      2

So the only difference between qz and eig appears for alpha = 1, in which case C contains only zeros. That seems to be a pathological case where qz may perform better. But this seems unlikely to indicate a bigger problem.
2. Given that alpha is rather big, the effect of theta should be rather small. For debugging purposes, I would focus on the cases where you would expect something to happen. That being said, the entry (2,2) of C in your code is \beta(1-\alpha)\sigma^2 and not \beta(1-\alpha)\theta^2 as in your formula

Thank you very much Prof. Pfeifer.

On Octave 4.2.1 the code returns:

Results based on generalized Schur:
For alpha = 1: (det,instab,Or1ind,Or2ind) = 2572 0 28341 9087
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 2762 0 33483 3755
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 4352 0 35397 251
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 5296 0 34685 19

Results for the eig() command
For alpha = 1: (det,instab,Or1ind,Or2ind) = 3325 0 27990 8685
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 3522 0 33135 3343
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 5155 0 34845 0
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 5722 0 34276 2

So in my case there is something at odds with qz().

Please explain your point (2.) in more detail; I do not quite follow. Thanks in advance.

Best regards,
Max

  1. I completely missed that you are not using the most recent Octave version. Indeed, you are affected by the bug you linked above. Please upgrade or use the eig version of your code.
  2. My point was that for the (2,2) entry of your matrix C, the TeX formula above and your code differ. In your code, \theta does not enter.

Thank you very much. The wrong \sigma in C(2,2) was totally my fault.
Thank you very much Prof. Pfeifer, you saved my day!

The code now replicates Fig. 2 of Branch and McGough (2009) when using the eig() command and matrix M. With the qz() command it is still wrong, presumably due to the bug in Octave 4.2.1?
What is the latest version of Octave with which Dynare is compatible?

At the current time:

I will now install Octave from your link and then Dynare 4.5.6.

During the installation of Octave 4.4.0 I got a warning that version 4.4.0 has not yet been tested under Windows 10.
Running a simple mod file for a linear model such as this one: TAR_commitment_HE.mod (3.2 KB)
seems to work fine with Dynare 4.5.6 and Octave 4.4.0 under Windows 10.

If I should try a more difficult task for testing Dynare/Octave please let me know.

But for my code above (correcting the wrong \sigma in both C matrices), Octave still has difficulties with qz(). Under Octave 4.4.0 I obtain the result:
Results based on generalized Schur:
For alpha = 1: (det,instab,Or1ind,Or2ind) = 2572 0 28341 9087
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 2539 0 26818 10643
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 2141 0 11702 26157
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 3257 0 3903 32840

Results for the eig() command
For alpha = 1: (det,instab,Or1ind,Or2ind) = 3325 0 27990 8685
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 3278 0 26331 10391
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 2746 0 11351 25903
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 2300 0 5246 32454

Best regards,
Max

Unfortunately, Octave 4.4.0 is still producing bad results with the qz() command. For a large part of my determinacy analysis I have to rely on the generalized Schur decomposition, since the matrix A in A E_t x_{t+1} = B x_t is not invertible.
Any ideas how to fix the problem with Octave?
Or should I rely on Dynare for my determinacy analysis? Is Dynare more robust?

By the way, Octave 4.4.0 in combination with Dynare 4.5.6 under Windows 10 seems to behave unstably. Sometimes when I call a mod file from the Octave command window, everything breaks down. If I relaunch everything two minutes later, it works. I have never experienced anything like this before.

  1. Can you provide us with a specific example/parameterization that produces a problem in Octave? We could then further investigate the issue and potentially file a bug report with the Octave developers. Maybe @sebastien has something to add as well.
  2. Relying on Dynare may be an alternative. Dynare comes with a mjdgges.mex file in the mex/octave folder that provides a compiled version of the ordered Schur decomposition. It is called with
    [err, ss, tt, w, sdim, dr.eigval, info1] = mjdgges(E, D, DynareOptions.qz_criterium, DynareOptions.qz_zero_threshold);

See also the header in matlab\qz
3. I have also experienced instabilities with Octave. The problem here seems to be with Octave itself and not to be reproducible, so I don’t know what to do.

W.r.t. 1.): try genSchur_vs_eig.m (4.6 KB)
On my PC it gives:
Results based on eig() command:
For alpha = 1: (det,instab,Or1ind,Or2ind) = 3325 0 27990 8685
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 3278 0 26331 10391
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 2746 0 11351 25903
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 2300 0 5246 32454

Results based on generalized Schur:
For alpha = 1: (det,instab,Or1ind,Or2ind) = 2572 0 28341 9087
For alpha = 0.99: (det,instab,Or1ind,Or2ind) = 2539 0 26818 10643
For alpha = 0.9: (det,instab,Or1ind,Or2ind) = 2141 0 11702 26157
For alpha = 0.8: (det,instab,Or1ind,Or2ind) = 3257 0 3903 32840

The figures also look different.

Dear Max1,

Your problem comes from an incompatibility between MATLAB and Octave in the qz() command.

More precisely, MATLAB and Octave do not follow the same rules in order to select between the real and the complex QZ decomposition. MATLAB by default does the complex one (and does the real one only if you give 'real' as the 3rd argument). Octave decides depending on the type of the two matrices: it will do a real decomposition if you give it real matrices, and a complex decomposition if you give it complex matrices.

Here, you want the complex decomposition, but Octave chooses the real one because you gave it real matrices. Hence your bug. You can fix this by modifying line 92 of genSchur_vs_eig.m as follows:

[S,T,Q,Z] = qz(complex(A),complex(B));

Note that it is my intention to fix this incompatibility by making Octave behave like MATLAB. Unfortunately, it is now too late to have it fixed in Octave 5 (which will be released soon), so the fix will most likely be in Octave 6 (to be released in 2020). In the meantime, you have to remember to use the above workaround.

Best,


Dear @sebastien,

thank you very much for this hint.
I knew about this difference between Matlab and Octave (Matlab uses the complex and Octave the real QZ decomposition by default), but I did not expect the real QZ decomposition to be the wrong choice.

If your time allows, please tell me why I am looking for the complex decomposition and not the real one. Is your workaround also necessary for computing the IRFs and moments of an RE-DSGE model if I program in Octave directly?

Best regards, (and happy new year)
Max

Please have a look at Klein (2000): “Using the generalized Schur form to solve a multivariate linear rational expectations model”, particularly page 1411.
For solving DSGE models it is usually sufficient to do the real QZ decomposition. The problem then is getting the eigenvalues, which are not the ratios of the diagonal elements in the case of the real QZ decomposition, but rather the generalized eigenvalues of pairs of 2-by-2 blocks in S and T. That is your problem: you compute the eigenvalues as if you were dealing with the complex QZ decomposition with complex triangular matrices, but you are actually doing the real QZ, which results in real quasi-triangular matrices.
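To see the point concretely, here is a small SciPy sketch (the example pencil is mine; Octave’s qz behaves analogously): a real matrix pair with a complex eigenvalue pair keeps a 2-by-2 block in the real form, so the diagonal ratios are wrong there, while the complex form is genuinely triangular and its diagonal ratios are the eigenvalues.

```python
import numpy as np
from scipy.linalg import qz

# real pencil (A, I) whose generalized eigenvalues are the complex pair +/- i
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.eye(2)

# real QZ: S is only quasi-triangular; the complex pair sits in a
# 2x2 block, so diag(S)/diag(T) does NOT recover +/- i
S, T, Q, Z = qz(A, B, output='real')

# complex QZ: S and T are genuinely triangular and the ratios are correct
Sc, Tc, Qc, Zc = qz(A, B, output='complex')
eigs = np.diag(Sc) / np.diag(Tc)   # the pair +/- i, modulus 1
```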


Thank you very much, and sorry for the late reply.
My dog died, hence my late answer.