Could someone help me with the model from Slobodyan & Wouters, 2007, "Learning in an estimated medium-scale DSGE model"?
- In the belief-updating step we have 11 endogenous state variables and 9 exogenous shocks, so each belief vector beta consists of 20 coefficients. With 12 forward-looking variables, that gives 12 x 20 = 240 coefficients. Together with a second-moment equation for each coefficient, do we end up with 480 equations for updating beliefs?
I am not sure whether Dynare can handle these updates in matrix form; if not, do we really have to write out all 480 equations by hand?
Also, the model equations are in log-linearised form, but the constant-gain learning equations are not. Will this affect the Dynare solution?
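To make the question concrete, here is how I understand the updating step, written as two matrix recursions rather than 480 scalar equations. This is only a sketch of standard constant-gain recursive least squares; the dimensions (20 regressors, 12 forward-looking variables) follow my count above, and the gain value is purely illustrative, not the paper's estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

n_x = 20    # regressors: 11 endogenous states + 9 exogenous shocks
n_y = 12    # forward-looking variables being forecast
gain = 0.01  # constant gain (illustrative value only)

# Beliefs: beta maps regressors to forecasts; R is the regressor
# second-moment matrix. Together they replace the 480 scalar equations.
beta = np.zeros((n_x, n_y))   # 20 x 12 = 240 coefficients
R = np.eye(n_x)               # 20 x 20 second-moment matrix

def cg_rls_update(beta, R, x, y, gain):
    """One constant-gain recursive-least-squares step."""
    R_new = R + gain * (np.outer(x, x) - R)
    err = y - beta.T @ x                          # forecast error, one per variable
    beta_new = beta + gain * np.linalg.solve(R_new, x)[:, None] * err[None, :]
    return beta_new, R_new

# Usage: feed in one period's realised regressors x_t and outcomes y_t.
x_t = rng.standard_normal(n_x)
y_t = rng.standard_normal(n_y)
beta, R = cg_rls_update(beta, R, x_t, y_t, gain)
print(beta.shape, R.shape)  # (20, 12) (20, 20)
```

If this reading is right, the belief update is just these two matrix equations evaluated once per period, which is why I am asking whether Dynare can express them directly.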
Are the initial beliefs generated from the variances, or from the autocorrelations (and hence covariances), obtained from the rational-expectations solution?
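My current guess is that the initial beliefs are the OLS projection implied by the rational-expectations second moments, i.e. beta_0 = Gamma_xx^{-1} Gamma_xy. The sketch below just illustrates that projection; the moment matrices here are random stand-ins, not the model's actual RE moments, and the whole initialisation scheme is my assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the RE solution's theoretical second moments:
# Gamma_xx = E[x x'] (regressor covariances), Gamma_xy = E[x y'].
A = rng.standard_normal((20, 20))
Gamma_xx = A @ A.T + 20 * np.eye(20)   # any symmetric positive-definite matrix
Gamma_xy = rng.standard_normal((20, 12))

# OLS projection implied by the RE moments: candidate initial beliefs.
beta_0 = np.linalg.solve(Gamma_xx, Gamma_xy)   # 20 x 12 initial coefficients
R_0 = Gamma_xx                                  # natural initialiser for R

print(beta_0.shape)  # (20, 12)
```

Is this the initialisation the paper actually uses, or is it based only on the variances?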
If I just calibrate the model with the posterior means reported in the paper, will the constant-gain learning mechanism behave the same way as in the paper?
Thank you very much to anyone who can help.