Computing Steady State with Estimated Parameters

Hello - I am trying to replicate something along the lines of Chari, Kehoe, and McGrattan (2007): a DSGE model with wedges. These wedges arise from an underlying stochastic process. One reads in data to recover this process (assumed to be Markov) and then uses the equilibrium conditions of the model to back out the implied wedges.

I’m having a problem defining the steady state of the model. The issue is that the steady-state values of k, y, c, etc. all depend on the steady-state values of the endogenous wedges. But those values are in turn estimated, and one needs the steady-state values of the model variables to do the estimation in the first place.

I am trying to use a steady_state_model block to define the steady state. I understand that an alternative is to write a steady state file myself, but I’m not sure how that would improve matters. Conceptually, I am having trouble seeing how Dynare can find the steady state in this case.
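To fix ideas, here is the kind of dependence I mean, as a Python sketch. The functional forms are stylized and not my actual model: z is an efficiency wedge, tau_l a labor wedge on the firm side, tau_k a capital wedge, and utility is log(c) - psi*n. Every steady-state value is a function of the wedge means:

```python
def steady_state(alpha, beta, delta, psi, zbar, tau_l_bar, tau_k_bar):
    """Closed-form steady state of a stylized RBC model with wedges.
    Illustrative only: utility is log(c) - psi*n and the labor wedge
    distorts the firm side, unlike my actual model."""
    # Euler equation pins down the capital-labor ratio:
    # 1 = beta*((1 - tau_k)*alpha*zbar*(k/n)^(alpha-1) + 1 - delta)
    kn = (((1.0/beta - 1.0 + delta) / ((1.0 - tau_k_bar)*alpha*zbar))
          ** (1.0/(alpha - 1.0)))
    # Resource constraint per unit of labor: c/n = zbar*(k/n)^alpha - delta*(k/n)
    cn = zbar*kn**alpha - delta*kn
    # Distorted labor FOC: psi*c = (1 - tau_l)*(1 - alpha)*zbar*(k/n)^alpha
    n = (1.0 - tau_l_bar)*(1.0 - alpha)*zbar*kn**alpha / (psi*cn)
    k, c = kn*n, cn*n
    y = zbar*k**alpha*n**(1.0 - alpha)
    return {'k': k, 'n': n, 'c': c, 'y': y}
```

The point is that k, y, and c all move with the wedge means, which is exactly what makes the estimation look circular to me.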

See https://github.com/JohannesPfeifer/DSGE_mod/tree/master/Chari_et_al_2007

Thanks for the reference. As always, your GitHub Dynare codes are very useful. The specification of the wedges is different in my model (I put them in the firm's problem and the production function, rather than as taxes on the consumer side).

I wrote my code using yours as a template, with everything log-linearized (including the wedges). I am using CKM’s initial parameter values but set the initial guesses for the wedge means to 0; because my wedges differ from theirs, I shouldn’t be using their initial values. However, I am only now getting the code to run, and I can’t get the steady state file to compute a steady state. The residuals are 0 in all equations except (2). I find this strange because (2) just defines the value of the labor wedge from my pre-set parameter value. If I play around with the initial values for the wedge means, I get residuals on (2) and (3), which set the labor and capital wedge respectively.

I’m not entirely sure what I’m doing wrong here.

exponentiated.mod (10.4 KB)
exponentiated_steadystate.m (2.7 KB)

Where exactly is the wedge in your equation (2)? I had

 psii*exp(c)^siggma/(1-exp(l))=(1-tau_l)*exp(w)

with tau_l being the wedge. You have

exp(w)*exp(c)^(-ssigma) = ppsi*exp(n)^(1/pphi);

You say you put them in the firm problem, but did you adjust the steady state file accordingly?

Yes. I’m using a different utility function. The wedge doesn’t show up in the labor-leisure decision because that decision is undistorted. The firm’s wage choice is distorted, however, as in equation (9) of the steady-state file. I wrote the steady-state file myself, so the equations there correspond to my model.

However, I have managed to get the file running if I don’t log-linearize but just make the model linear in levels. I’m not sure exactly why this is. I have attached the set of files that works.

I have two other questions if you have a moment:
(1) How do you pick good initial values for MLE? I keep getting the message “Warning: Matrix is singular, close to singular or badly scaled. Results may be inaccurate. RCOND = NaN.” The estimation routine also fails to calculate variances for my estimated parameters (they are just NaN). Based on the plots, it seems to have particular difficulty with the P0 matrix and all elements of the P matrix (the variance-covariance matrix of the wedge innovations). Note that I am using the same steady-state values as CKM despite having a different model.

(2) When you feed the wedges back into the model, why did you log-linearize all of the equations around the model’s steady state? Can’t you just use these equations directly? E.g., to recover capital, why not use the capital law of motion itself rather than its log-linearized counterpart? And when computing tau_x (lines 329-339 of your code), why are you using the linear solution?

Thanks a lot again!

rough_code_orig.mod (12.2 KB)
rough_code_orig_steadystate.m (2.7 KB)

  1. The data file needed to run your code is missing.
  2. If the model works in levels but not with exp(), there is most probably a wrong or missing substitution somewhere. In any case, it is usually better to work with auxiliary equations to get a log-linearization. See the thread “Question about understanding irfs in dynare”.
  3. Getting the MLE estimates is hard; it is a lot of trial and error. The challenging part in the beginning is making sure that the problems do not come from mistakes in the coding.
  4. I was trying to replicate their paper. Did you see the note in the mod-file?
> CKM use the linearized model only to extract the investment wedge and the decision rules. All other wedges are computed based on the original nonlinear model equations. For this purpose, the capital stock is initialized at the steady state value in the first period and then iterated forwards. This mod-file also shows how to use the Kalman smoother to directly extract the smoothed wedges. As these are based on the linearized model, they differ from the ones derived from the nonlinear equations due to Jensen's Inequality.
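The forward iteration mentioned in that note is nothing more than applying the nonlinear law of motion period by period, starting from the steady-state capital stock. A minimal Python sketch (delta and the investment series are placeholders, not values from the mod-file):

```python
def iterate_capital(x, k_ss, delta):
    """Iterate k(t+1) = (1 - delta)*k(t) + x(t) forward, with the
    first-period capital stock initialized at its steady-state value."""
    k = [k_ss]
    for x_t in x:  # x is the measured investment series
        k.append((1.0 - delta) * k[-1] + x_t)
    return k
```

With x_t = delta*k_ss in every period, capital simply stays at its steady state; fed the measured investment series, this recovers the capital path used when computing the other wedges from the nonlinear equations.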

Thanks! The data file is the same as that used in the CKM replication but with the variables renamed. I am attaching it.

I appreciate the reference for exponentiating the model. But why did you exponentiate z and g, but not tau_l and tau_k?

Sorry, I missed that specific note in CKM, but now I see why you only used the decision rules implied by the stochastic process to recover tau_x. My question is slightly different, though: to use this linearized decision rule, one needs values for tau_l and tau_a in every period. You get these from the log-linearized resource constraint, production function, and labor FOC. But why do these have to be log-linearized? Can’t you recover them from the full nonlinear model? Put another way, I understand why you use only the linear decision rule for tau_x, but why feed it approximations to its inputs rather than the inputs themselves?
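What I have in mind is something like the following (Python for illustration, with hypothetical functional forms; my model’s FOCs are different): solve the nonlinear production function and labor FOC for the wedges period by period,

```python
def efficiency_wedge(y, k, n, alpha):
    """Solve y = z * k^alpha * n^(1-alpha) for z in a given period."""
    return y / (k**alpha * n**(1.0 - alpha))

def labor_wedge(y, c, n, alpha, psi):
    """Solve a distorted labor FOC, psi*c = (1 - tau_l)*(1 - alpha)*y/n,
    for tau_l in a given period (assuming u = log(c) - psi*n)."""
    return 1.0 - psi * c * n / ((1.0 - alpha) * y)
```

rather than solving the log-linearized counterparts of these equations for log-deviations of the wedges.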

Data_CKM_orig2.mat (5.9 KB)

  1. Everything is measured in percent. The pseudo-tax rates are already in percent and are therefore not logged again.
  2. Yes, you could do that. But I did not write the CKM paper; I only follow the approach they used (which does not seem entirely consistent).
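Concretely, the unit convention amounts to this (a toy sketch with illustrative names, not code from the mod-file):

```python
import math

def log_dev_percent(x, x_ss):
    """z and g are strictly positive levels, so their percent deviation
    from steady state is measured as 100*log(x/x_ss)."""
    return 100.0 * math.log(x / x_ss)

def rate_dev(tau, tau_ss):
    """tau_l and tau_k are already (pseudo-)tax rates, so their
    deviation is taken directly, without logging."""
    return tau - tau_ss
```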