Hello all,
I’m trying to estimate the following model under constant-gain least squares learning, as presented in Milani (2014) and elsewhere:
\pi_t = \beta E_{t-1}\pi_{t+1} + \kappa x_t + u_t \\ x_t = E_{t-1} x_{t+1} - \sigma(i_t - E_{t-1}\pi_{t+1}) + g_t \\ i_t = \rho_t i_{t-1} + (1-\rho_t)(\chi_{\pi,t}\pi_{t-1}+\chi_{x,t}x_{t-1}) + \varepsilon_{r,t} \\ u_t = \rho_u u_{t-1} + \varepsilon_{u,t}\\ g_t = \rho_g g_{t-1} + \varepsilon_{g,t}
Wherein agents have the perceived law of motion Z_t = a_t + b_t Z_{t-1} + \eta_t, Z_t \equiv (\pi_t,x_t,i_t)', and agents' beliefs \phi_t = (a_t,b_t)' are updated according to the scheme \phi_t = \phi_{t-1} + \gamma R_{t-1}^{-1}X_t(y_t - \phi_{t-1}'X_t)' \\ R_t = R_{t-1} + \gamma (X_tX_t' - R_{t-1}), implying that E_{t-1}Z_{t+1} = a_{t-1} + b_{t-1}a_{t-1} + b_{t-1}^2Z_{t-1}
X_t \equiv (1,\pi_{t-1},x_{t-1},i_{t-1})'
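For concreteness, here is a minimal NumPy sketch of one constant-gain update step as written above (function and variable names are my own, and I follow the timing in the post, i.e. R_{t-1}^{-1} in the \phi update):

```python
import numpy as np

def update_beliefs(phi, R, X, y, gamma):
    """One constant-gain recursive least squares belief update.

    phi   : (k, n) stacked belief coefficients phi_{t-1} = (a, b)'
    R     : (k, k) second-moment matrix of the regressors, R_{t-1}
    X     : (k,)   regressor vector X_t = (1, pi_{t-1}, x_{t-1}, i_{t-1})'
    y     : (n,)   realized data Z_t = (pi_t, x_t, i_t)'
    gamma : constant gain
    """
    # phi_t = phi_{t-1} + gamma * R_{t-1}^{-1} X_t (y_t - phi_{t-1}' X_t)'
    forecast_error = y - phi.T @ X
    phi_new = phi + gamma * np.outer(np.linalg.solve(R, X), forecast_error)
    # R_t = R_{t-1} + gamma * (X_t X_t' - R_{t-1})
    R_new = R + gamma * (np.outer(X, X) - R)
    return phi_new, R_new
```

Note that some papers instead use R_t^{-1} in the \phi update (updating R first); the two timings differ only slightly but it is worth checking which one your reference uses.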
Has anyone else coded a likelihood function for a similar model, or is there perhaps a way to use Dynare to estimate such a model?
kalman.m (3.8 KB)
Under adaptive learning the beliefs themselves enter the transition equations, and because the beliefs evolve over time, you can write the solution as a state-space system with time-varying transition matrices.
To obtain the likelihood function you can use the Kalman filter as usual, modified to include an updating step for the transition matrices that is consistent with your assumptions about the expectation-formation mechanism.
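A sketch of the filtering recursion with (possibly time-varying) system matrices, in NumPy; in the learning case you would rebuild c_t and T_t from the updated beliefs before each call (all names here are my own, not from any particular codebase):

```python
import numpy as np

def kf_step(s, P, y, c, T, R, Q, d, Z, H):
    """One Kalman filter step for the state space
       s_t = c + T s_{t-1} + R eta_t,  eta_t ~ N(0, Q)
       y_t = d + Z s_t + eps_t,        eps_t ~ N(0, H)
    Returns the filtered state, its covariance, and the
    period-t log-likelihood contribution."""
    # Prediction with this period's (belief-dependent) matrices
    s_pred = c + T @ s
    P_pred = T @ P @ T.T + R @ Q @ R.T
    # Innovation and its variance
    v = y - (d + Z @ s_pred)
    F = Z @ P_pred @ Z.T + H
    F_inv = np.linalg.inv(F)
    # Gaussian log-likelihood contribution
    ll = -0.5 * (len(y) * np.log(2 * np.pi)
                 + np.log(np.linalg.det(F))
                 + v @ F_inv @ v)
    # Measurement update
    K = P_pred @ Z.T @ F_inv
    s_filt = s_pred + K @ v
    P_filt = (np.eye(len(s)) - K @ Z) @ P_pred
    return s_filt, P_filt, ll
```

Summing the `ll` terms over t gives the log-likelihood you would hand to an optimizer or posterior sampler.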
Slobodyan and Wouters (2012) modified some Dynare code to do this with small forecasting models, but I think the code no longer runs after one of the Dynare updates. It is available in the data download from the link above.
So if I need to cast the model with learning into the following state-space representation:
y_t = d_t + Z_t s_t + \varepsilon_t, \quad \varepsilon_t \sim N(0,H_t) \\ s_t = c_t + T_t s_{t-1} + R_t \eta_t, \quad \eta_t \sim N(0,Q_t) where y_t is the vector of observables and s_t is the (possibly unobserved) state vector, how could beliefs enter the transition matrix T_t? I can see why they would enter c_t, but not T_t.
Your specification of a PLM lets you write down exactly what forecasting model your agents are using; for example, you have written E_{t-1}Z_{t+1} = a_{t-1} + b_{t-1}a_{t-1} + b_{t-1}^{2}Z_{t-1}, which means you have already done the hard work of defining the point forecasts E_{t-1}\pi_{t+1} and E_{t-1}x_{t+1} (the only forward-looking terms in your model) as functions of the belief coefficients a_{t-1} and b_{t-1}.
If you substitute these forward expectations into the equations governing your model, you will have derived the Actual Law of Motion (ALM) for the economy, whose coefficients are given by the T-map, which is itself a function of the time-t beliefs.
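To make this concrete (a sketch, with A, B, \mu and C as hypothetical stacked-coefficient matrices and w_t the vector of shocks): since E_{t-1}Z_{t+1} contains the term b_{t-1}^2 Z_{t-1}, substituting the forecasts into the structural equations yields a system of the form

A Z_t = \mu(\phi_{t-1}) + B(\phi_{t-1}) Z_{t-1} + C w_t

so that

Z_t = A^{-1}\mu(\phi_{t-1}) + A^{-1}B(\phi_{t-1}) Z_{t-1} + A^{-1}C w_t.

The intercept A^{-1}\mu(\phi_{t-1}) goes into c_t, but the slope A^{-1}B(\phi_{t-1}) multiplies Z_{t-1} and therefore sits in T_t. Because b_{t-1}^2 appears in B, the beliefs enter the transition matrix directly, not just the constant.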
Two useful references here:

- Milani (2007) [link], especially Section 2
- Evans and Honkapohja (2001) [link]