Lik_init=2 vs. lik_init=1

Good evening,
I have the following questions regarding the lik_init=1 and lik_init=2 options. I should mention that I am working with a model in which all variables, including the observables, are a priori stationary:

1.) The manual says for lik_init=1: "For stationary models, the initial matrix of variance of the error of forecast is set equal to the unconditional variance of the state variables." In terms of the Kalman filter equations I attached, do I understand correctly that this refers to the initialization of the P matrix? And if so, does it refer to P_0|0 or to P_1|0? (1 being the first quarter of the sample.)

2.) The analogous question for lik_init=2: Does the manual refer to P_0|0 or to P_1|0?

3.) For lik_init=2, why does the first of the updated values equal zero? I have read the legacy post from Oct 2016, but I am afraid I don’t understand the reply (maybe the answer relates to my question 1.)

4.) Would lik_init=2 be expected to lead to more volatile updated values?

5.) Does lik_init=2 tend to result in smaller estimated AR(1) coefficients, especially for highly persistent processes? Or is it really not possible to generalize on this topic?

Many thanks for your help!
Ansgar

Example_Kalman.pdf (57.0 KB)

The lik_init option is concerned with the initialization step of the Kalman filter (so yes, the initialization of the P matrix), not with the updating or forecasting steps, i.e. it determines the P matrix you set in the very first step. As you have a stationary model and stationary observables, the lik_init=1 option is best for you: it solves the Lyapunov equation P - T*P*T' = R*Q*R' arising from the state-space system, where P is the variance of the states. This matrix is then used to initialize the covariance matrix of the (normal) distribution of the states, i.e. the unconditional second moments. On the other hand, lik_init=2 uses a scaled identity matrix. Have a look into dsge_likelihood.m for more details:

switch DynareOptions.lik_init
  case 1 % Standard initialization with the steady state of the state equation.
    ...
    Pstar = lyapunov_solver(T,R,Q,DynareOptions);
    ...
  case 2 % Initialization with large numbers on the diagonal of the covariance matrix of the states (for non-stationary models).
    ...
    Pstar = DynareOptions.Harvey_scale_factor*eye(mm);
    ...
end
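
To make the difference concrete, here is a minimal sketch (not Dynare code; a toy two-state example with made-up T, R, Q) of what the two cases deliver. The Lyapunov solution is obtained here via plain vectorization instead of lyapunov_solver:

% Toy state equation x_t = T*x_{t-1} + R*e_t with e_t ~ N(0,Q) (made-up numbers)
T = [0.9 0.1; 0 0.5];
R = eye(2);
Q = diag([0.01 0.04]);
mm = size(T,1);

% lik_init=1: unconditional variance, i.e. the solution of P = T*P*T' + R*Q*R'
% (what lyapunov_solver computes), obtained here by vectorizing the equation
vecP   = (eye(mm^2) - kron(T,T)) \ reshape(R*Q*R', mm^2, 1);
Pstar1 = reshape(vecP, mm, mm);

% lik_init=2: large numbers on the diagonal (Harvey_scale_factor times identity)
Pstar2 = 10*eye(mm);

Pstar1 encodes the model-implied uncertainty about the initial states, while Pstar2 simply says "very uncertain" in every direction.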

From a practical point of view, I usually use lik_init=1 for stationary models, and lik_init=3 for stationary observables in non-stationary models.
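
For reference, the option is simply passed to the estimation command in the mod-file; a minimal fragment (the datafile and observable names below are placeholders):

estimation(datafile=mydata, lik_init=1, mh_replic=0) y_obs c_obs;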


Thank you, and sorry for the delay in replying. I understand that this is the theory, but in practice Smets and Wouters (2007), for instance, use lik_init=2 even though they transform all their observables before estimation in order to make them stationary. I asked Raf Wouters about this, and if I understood correctly, his view was that if you have variables which are stationary in the model but very persistent in the data (not quite stationary, loosely speaking), then lik_init=2 might be superior. I think an example might be the hours-to-population ratio in the US. But he didn’t express a clear view either way, so it seems to be a bit of a grey area(?).
My impression (based on a very limited number of comparisons) was that lik_init matters much more for the smoothed values of the observable variables than for the parameter estimates, which appeared quite close across the two options.

The Kalman filter recursion does not stipulate how to initialize the recursion. In essence, it is about specifying your prior belief about the states. People seem to agree that the prior mean should be the steady state; differences arise about the prior variance. You can either use the unconditional variance (the standard way) or a rather diffuse initial variance (the Harvey way). That explains the statement about very persistent models: there, the states can be quite far away from the steady state for some time, and you may want to reflect this in your initialization. Because the effect of the initial values typically dies out at some point, and because the impact effect of shocks tends to dominate relative to the mean-reversion tendency, the model dynamics are often similar quite independently of the initialization. The biggest change therefore typically shows up in your smoothed state estimates, because the rather diffuse prior allows them to be further away from the steady state initially if the data indicates so.
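
To illustrate the "dies out" part, here is a minimal univariate sketch (made-up numbers, unit observation loading, not Dynare code): iterate the forecast-error variance recursion from the two initializations and watch the gap between them vanish, which is why parameter estimates are often close while the early smoothed states can differ:

% x_t = T*x_{t-1} + e_t, y_t = x_t + u_t, with Var(e)=Q, Var(u)=H (made-up numbers)
T = 0.95; Q = 0.01; H = 0.1;
P1 = Q/(1 - T^2);   % lik_init=1: unconditional variance
P2 = 10;            % lik_init=2: large Harvey-style prior variance
for t = 1:20
    % updating step: P_{t|t} = P_{t|t-1} - P_{t|t-1}^2/(P_{t|t-1} + H)
    P1u = P1 - P1^2/(P1 + H);
    P2u = P2 - P2^2/(P2 + H);
    % forecasting step: P_{t+1|t} = T^2*P_{t|t} + Q
    P1 = T^2*P1u + Q;
    P2 = T^2*P2u + Q;
    fprintf('t=%2d   gap in P_{t+1|t}: %g\n', t, abs(P1 - P2));
end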


Ah, thank you Johannes, that is very helpful.