Slobodyan/Wouters (2012)

T_AR2_no4.mod (7.2 KB)
Hello all,

I’m trying to replicate Slobodyan/Wouters (2012). The .mod file allows for estimation of a DSGE model with Kalman-filter learning. However, I am trying to implement the setup with constant-gain least-squares learning. I understand that this requires changing the kalman_algo option from 603 to 203 and including the gain and ro parameters in the params and estimation blocks. Despite that, I consistently get error messages when trying to execute this model.

The Slobodyan/Wouters code requires their modified version of Dynare. It will not run with standard Dynare versions, as the learning rules (e.g. kalman_algo 203 or 603) are not implemented there.

I understand that, and to that end I am using Dynare 3.064 in MATLAB 2012b. I have downloaded, and set as the working directory, the Dynare files supplied on the AEA webpage for this paper:

https://www.openicpsr.org/openicpsr/project/114246/version/V1/view

The provided .mod file works fine, but it fails if I change the kalman_algo option from 603 to 203.

I don’t have that setup, which makes support challenging to impossible. What is the exact error message?

Index in position 1 is invalid. Array indices must be positive integers or logical values.

Error in BetaFromTR (line 25)
XX11 = tmp(ys_list,ys_list);

Error in DsgeLikelihood (line 135)
[betamat,SecondMoments,R_beta] = BetaFromTR(T,R,Q);

Error in initial_estimation_checks (line 21)
[fval,cost_flag,ys,trend_coeff,info] = DsgeLikelihood(xparam1,gend,data);

Error in dynare_estimation (line 597)
initial_estimation_checks(xparam1,gend,data);

Error in T_AR2_no4 (line 338)
dynare_estimation(var_list_);

Error in dynare (line 26)
evalin('base',fname) ;

Error in start (line 1)
dynare T_AR2_no4.mod

This message occurs whether I run it in MATLAB 2019b or 2012b.

I also had MATLAB save ys_list to a .mat file, and it appears that one of the elements of ys_list is zero; but MATLAB arrays are indexed from 1, so tmp has no "zeroth" row or column.

Then you should investigate why ys_list contains a 0. This sounds like a naming issue, e.g. no match was found for one of the variables.

Any idea where to start?

I will try to have a look on the weekend. I haven't run CG myself since 2010.


Any help would be greatly appreciated!

Any updates since Friday?

Bump.

I believe I've solved the problem, but I'm not sure. Let me try to explain how I computed the likelihood function for this model.

For any model that has only one-step-ahead expectations (this might extend to longer leads, but I'm not sure), the log-linearized model can be written in the following form:

\Gamma_L s_t = \Gamma_R s_t + \Gamma_e s_{t+1}^e + \Gamma_1 s_{t-1} + \Psi \varepsilon_t \\ y_t = \bar{\mathbf{d}} + \bar{\mathbf{Z}} s_t + \eta_t

We seek a state-space representation of the form:

s_t = c_t + T_t s_{t-1} + R_t \varepsilon_t \\ y_t = \bar{\mathbf{d}} + \bar{\mathbf{Z}} s_t + \eta _t

In a constant-gain least-squares setup where agents have the perceived law of motion s_t = a_t + b_t s_{t-1} + \epsilon_t, one can iterate the law of motion forward one period (using the beliefs available at t-1) to write the expectation as \hat{E}_{t-1}s_{t+1} = a_{t-1} + b_{t-1} a_{t-1} + b_{t-1}^2 s_{t-1}.

We can substitute the expectations into the original model to yield

\Gamma_L s_t = \Gamma_R s_t + \Gamma_e (a_{t-1} + b_{t-1} a_{t-1} + b_{t-1}^2 s_{t-1}) + \Gamma_1 s_{t-1} + \Psi \varepsilon_t

Re-arranging we obtain

s_t = (\Gamma_L-\Gamma_R)^{-1}\Gamma_e(a_{t-1}+b_{t-1}a_{t-1}) + (\Gamma_L-\Gamma_R)^{-1}(\Gamma_e b^2_{t-1}+\Gamma_1)s_{t-1} + (\Gamma_L-\Gamma_R)^{-1}\Psi \varepsilon_t

which yields the time-varying intercept c_t and transition matrix T_t:

T_t = (\Gamma_L-\Gamma_R)^{-1}(\Gamma_e b^2_{t-1}+\Gamma_1) \\ c_t = (\Gamma_L-\Gamma_R)^{-1}\Gamma_e(a_{t-1}+b_{t-1}a_{t-1})

This, together with the constant shock-loading matrix R = (\Gamma_L-\Gamma_R)^{-1}\Psi, allows us to compute the likelihood function using the Kalman filter.
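For concreteness, here is a minimal MATLAB sketch of one period of this update. The names GammaL, GammaR, GammaE, Gamma1, Psi, a, b, s_lag, and eps_t are illustrative assumptions, not the variable names used in the Slobodyan/Wouters codes:

% One period of the learning state-space update; a and b are the beliefs
% carried over from t-1, s_lag is s_{t-1}, eps_t the structural shocks.
% All names are illustrative, not taken from the original codes.
A   = GammaL - GammaR;
c_t = A \ (GammaE * (a + b*a));          % time-varying intercept
T_t = A \ (GammaE * b^2 + Gamma1);       % time-varying transition matrix
R_t = A \ Psi;                           % shock loading (constant over time)
s_t = c_t + T_t * s_lag + R_t * eps_t;   % transition equation fed to the filter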

@jthomp10 The learning literature is something I am not very familiar with. Your best hope is indeed to wait for an answer by Sergey (@verdi_green), who wrote the original code.

This is really good, thank you for sharing it with us.

OK, sorry for being so sluggish, too much work.

Yes, indeed, the suggestion above is correct: ys_list probably didn't find an index for one of the variables.

In dynare_estimation.m, the following code is used to create the ys_list variable:

if options_.kalman_algo > 100
    % Map each name in states to its row in the variable-name matrix lgy_
    % (reordered by dr.order_var); unmatched names keep the initial 0.
    states = sort(states);
    tmp = lgy_(dr.order_var,:);
    ys_list = zeros(length(states),1);
    for i = 1:size(lgy_,1)
        j = strmatch(deblank(tmp(i,:)),states,'exact');
        if ~isempty(j)
            ys_list(j) = i;
        end
    end
end

ys_list is initialized as a vector of zeros; therefore, if some elements of ys_list remain 0, this means that the states variable initially included a variable that is not present in the .mod file.
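To see which names failed to match, a quick diagnostic along the lines of the excerpt above (a sketch; it assumes the same states and ys_list variables) is:

% Print the entries of states that found no match, i.e. the positions
% where ys_list stayed at its initial value of zero.
unmatched = find(ys_list == 0);
if iscell(states)
    disp(states(unmatched));       % cell array of name strings
else
    disp(states(unmatched,:));     % char-matrix case
end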


Since writing this post I've combined the likelihood with the prior specification used in Slobodyan/Wouters (2012). I used MATLAB's fminunc to maximize the posterior density; the value seems reasonable and is somewhat higher (a log difference of 10 or so) than the posterior mode for the same model under rational expectations.
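For reference, the mode search can be set up roughly as below; negLogPost is a hypothetical wrapper returning minus the log posterior (the learning Kalman-filter log-likelihood plus the log prior), and theta0 holds the starting values:

% Minimal sketch of the posterior-mode search with fminunc.
opts = optimset('Display','iter','LargeScale','off','MaxIter',1000);
[theta_mode, fval, exitflag, output, grad, hess] = ...
    fminunc(@(th) negLogPost(th), theta0, opts);
% -fval is the log posterior at the mode; hess is fminunc's Hessian estimate.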

I've run into a major problem, however: when using csminwel to find the maximum of the posterior density, after around 10 iterations the algorithm guesses a parameter vector with NaN entries. I used a numerical Hessian calculator from Herbst and Schorfheide (2014) to compute the Hessian matrix, but this produced a non-invertible Hessian with extremely large diagonal elements (greater than 1e+5 in many cases).

While some authors in the empirical learning literature have given somewhat helpful explanations of how to compute the likelihood function, I have not found any discussion of finding the Hessian matrix to use for a Metropolis-Hastings algorithm.
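For what it's worth, a generic central-difference Hessian (a textbook sketch, not the Herbst/Schorfheide implementation; f is a handle to the negative log posterior, x the parameter vector, h a step size) looks like this:

% Save as numhess.m: central-difference approximation to the Hessian of f at x.
function H = numhess(f, x, h)
n = numel(x);
H = zeros(n);
for i = 1:n
    for j = 1:n
        ei = zeros(n,1); ei(i) = h;
        ej = zeros(n,1); ej(j) = h;
        H(i,j) = (f(x+ei+ej) - f(x+ei-ej) ...
                - f(x-ei+ej) + f(x-ei-ej)) / (4*h^2);
    end
end
end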

May be helpful.

Well, I know that my measurement equation is not wrong, but how can I diagnose unidentified parameters?

Are you sure it’s a matter of identification? You may be able to run the identification-command or do mode_check-plots.
I linked the above post to suggest using an arbitrary matrix instead of the Hessian.
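To make that concrete, a random-walk Metropolis-Hastings loop with an arbitrary (here simply diagonal) proposal covariance in place of the inverse Hessian could look like the sketch below; logPost, theta_mode, npar, and ndraws are assumed to be defined, and the scale of Sigma would need tuning for a reasonable acceptance rate:

% RWMH with an arbitrary proposal covariance instead of the inverse Hessian.
Sigma = 1e-4 * eye(npar);            % arbitrary diagonal proposal covariance
cholS = chol(Sigma, 'lower');
theta = theta_mode;                  % start the chain at the mode
lp    = logPost(theta);
draws = zeros(ndraws, npar);
for d = 1:ndraws
    prop    = theta + cholS * randn(npar,1);
    lp_prop = logPost(prop);
    if log(rand) < lp_prop - lp      % standard MH acceptance step
        theta = prop;
        lp    = lp_prop;
    end
    draws(d,:) = theta';
end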
