Bump.
I believe I’ve solved the problem but I’m not sure. Let me try to explain how I computed the likelihood function for this model.
For any model that has only one-step-ahead expectations (this might carry over to longer leads, but I'm not sure), the log-linearized model can be written in the following form:
\Gamma_L s_t = \Gamma_R s_t + \Gamma_e s_{t+1}^e + \Gamma_1 s_{t-1} + \Psi \varepsilon_t \\ y_t = \bar{\mathbf{d}} + \bar{\mathbf{Z}} s_t + \eta_t

where s_{t+1}^e = \hat{E}_{t-1} s_{t+1} denotes agents' subjective expectation of s_{t+1}.
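To fix notation and dimensions, here is a toy two-variable instantiation in Python/NumPy; every number below is a placeholder of my own, not taken from any particular model:

```python
import numpy as np

# Toy two-variable instantiation of
#   Gamma_L s_t = Gamma_R s_t + Gamma_e s_{t+1}^e + Gamma_1 s_{t-1} + Psi eps_t
#   y_t = d_bar + Z_bar s_t + eta_t
# (placeholder numbers, chosen only so that Gamma_L - Gamma_R is invertible)
Gamma_L = np.eye(2)
Gamma_R = np.array([[0.0, 0.1],
                    [0.2, 0.0]])
Gamma_e = np.array([[0.5, 0.0],
                    [0.0, 0.4]])
Gamma_1 = np.array([[0.3, 0.0],
                    [0.0, 0.2]])
Psi     = np.eye(2)
d_bar   = np.zeros(2)   # measurement intercept
Z_bar   = np.eye(2)     # states observed directly in this toy setup
```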
We seek a state space representation of the form
s_t = c_t + T_t s_{t-1} + R_t \varepsilon_t \\ y_t = \bar{\mathbf{d}} + \bar{\mathbf{Z}} s_t + \eta_t
In a constant-gain least-squares setup, agents have the following perceived law of motion: s_t = a_t + b_t s_{t-1} + \epsilon_t. Iterating the PLM forward with the beliefs (a_{t-1}, b_{t-1}) available when the expectation is formed gives \hat{E}_{t-1} s_t = a_{t-1} + b_{t-1} s_{t-1}, and hence \hat{E}_{t-1}s_{t+1} = a_{t-1} + b_{t-1} a_{t-1} + b_{t-1}^2 s_{t-1}.
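For concreteness, here is a minimal Python/NumPy sketch of that expectation together with a standard constant-gain recursive-least-squares belief update. The update recursion is not spelled out above, so the exact form below (and all function names) are my own assumptions about the setup:

```python
import numpy as np

def expectation(a_prev, b_prev, s_prev):
    """E_{t-1} s_{t+1} = a_{t-1} + b_{t-1} a_{t-1} + b_{t-1}^2 s_{t-1},
    from iterating the PLM s_t = a + b s_{t-1} forward twice."""
    return a_prev + b_prev @ a_prev + b_prev @ b_prev @ s_prev

def cgls_update(a, b, R, s_now, s_prev, gain):
    """One constant-gain least-squares step for the PLM s_t = a + b s_{t-1}.
    R is the moment matrix of the regressor x = [1, s_{t-1}]."""
    x = np.concatenate(([1.0], s_prev))             # regressor vector, length n+1
    phi = np.column_stack((a, b))                   # (n, n+1) coefficients [a | b]
    R = R + gain * (np.outer(x, x) - R)             # moment-matrix update
    err = s_now - phi @ x                           # one-step forecast error
    phi = phi + gain * np.outer(err, np.linalg.solve(R, x))
    return phi[:, 0], phi[:, 1:], R
```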
We can substitute this expectation into the original model to yield
\Gamma_L s_t = \Gamma_R s_t + \Gamma_e (a_{t-1} + b_{t-1} a_{t-1} + b_{t-1}^2 s_{t-1}) + \Gamma_1 s_{t-1}+ \Psi \varepsilon_t
Rearranging, we obtain
s_t = (\Gamma_L-\Gamma_R)^{-1}\Gamma_e(a_{t-1}+b_{t-1}a_{t-1}) + (\Gamma_L-\Gamma_R)^{-1}(\Gamma_e b^2_{t-1}+\Gamma_1)s_{t-1} + (\Gamma_L-\Gamma_R)^{-1}\Psi \varepsilon_t

which yields the time-varying intercept c_t, transition matrix T_t, and shock loading R_t:
T_t = (\Gamma_L-\Gamma_R)^{-1}(\Gamma_e b^2_{t-1}+\Gamma_1) \\ c_t = (\Gamma_L-\Gamma_R)^{-1}\Gamma_e(a_{t-1}+b_{t-1}a_{t-1}) \\ R_t = (\Gamma_L-\Gamma_R)^{-1}\Psi
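A minimal sketch of that mapping, assuming the structural matrices and the beliefs (a_{t-1}, b_{t-1}) are plain NumPy arrays:

```python
import numpy as np

def state_space_matrices(Gamma_L, Gamma_R, Gamma_e, Gamma_1, Psi, a_prev, b_prev):
    """Map beliefs (a_{t-1}, b_{t-1}) into the time-varying state-space
    matrices derived above: s_t = c_t + T_t s_{t-1} + R_t eps_t."""
    G = np.linalg.inv(Gamma_L - Gamma_R)             # (Gamma_L - Gamma_R)^{-1}
    c_t = G @ Gamma_e @ (a_prev + b_prev @ a_prev)   # intercept
    T_t = G @ (Gamma_e @ b_prev @ b_prev + Gamma_1)  # transition
    R_t = G @ Psi                                    # shock loading
    return c_t, T_t, R_t
```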
With c_t, T_t, and R_t in hand, we can compute the likelihood function with the Kalman filter, updating the beliefs (a_t, b_t) each period via the constant-gain least-squares recursion.
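Putting the pieces together, here is a hedged sketch of the likelihood evaluation: run a time-varying Kalman filter on (c_t, T_t, R_t) and update the beliefs each period. It reuses state_space_matrices and cgls_update from the sketches above; Q and H (shock and measurement-error covariances), the gain, and the choice to update beliefs with filtered states rather than observables are all my assumptions:

```python
import numpy as np

def log_likelihood(y, Gamma_L, Gamma_R, Gamma_e, Gamma_1, Psi, d_bar, Z_bar,
                   Q, H, a0, b0, R_ls0, s0, P0, gain):
    """Gaussian log likelihood via a Kalman filter with time-varying
    (c_t, T_t, R_t) built from last period's beliefs; beliefs are then
    updated by constant-gain least squares on the filtered states."""
    a, b, R_ls = a0.copy(), b0.copy(), R_ls0.copy()
    s, P = s0.copy(), P0.copy()
    loglik = 0.0
    for y_t in y:                                    # y: iterable of observation vectors
        c_t, T_t, R_t = state_space_matrices(Gamma_L, Gamma_R, Gamma_e,
                                             Gamma_1, Psi, a, b)
        # prediction step
        s_pred = c_t + T_t @ s
        P_pred = T_t @ P @ T_t.T + R_t @ Q @ R_t.T
        # measurement update
        v = y_t - (d_bar + Z_bar @ s_pred)           # innovation
        F = Z_bar @ P_pred @ Z_bar.T + H             # innovation covariance
        K = P_pred @ Z_bar.T @ np.linalg.inv(F)      # Kalman gain
        s_filt = s_pred + K @ v
        P = P_pred - K @ Z_bar @ P_pred
        loglik += -0.5 * (len(v) * np.log(2.0 * np.pi)
                          + np.linalg.slogdet(F)[1]
                          + v @ np.linalg.solve(F, v))
        # belief update: regress this period's filtered state on last period's
        a, b, R_ls = cgls_update(a, b, R_ls, s_filt, s, gain)
        s = s_filt
    return loglik
```

On the toy matrices from the first sketch this runs with, for example, Q = H = 0.01*np.eye(2), zero initial beliefs (a0 = np.zeros(2), b0 = np.zeros((2, 2))), R_ls0 = np.eye(3), s0 = np.zeros(2), P0 = np.eye(2), and a small gain like 0.02, but those initializations are illustrative only.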